Ancient to Modern: Upgrading Nearly a Decade of Plone in Public Radio
Formal Metadata

Title: Ancient to Modern: Upgrading Nearly a Decade of Plone in Public Radio
License: CC Attribution - NonCommercial 3.0 Unported: You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal and non-commercial purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Identifier: 10.5446/48045 (DOI)
Number of Parts: 45
Production Place: Bristol, UK
Plone Conference 2014, talk 42 of 45
Transcript: English (auto-generated)
00:06
All right, welcome everybody. My name is Cris Ewing and I'm here presenting a project that I did under the auspices of Jazkarta, a Plone firm in the United States located in Boston. I myself am from the Seattle area and the project that we worked on
00:22
was for a public radio station called KCRW that's actually located down in Santa Monica in California. So I'm gonna take you on a little bit of a journey today and the journey is gonna start out with a website that was born long ago and came to the world of Plone around 2004, 2005 and has grown up
00:41
over the last decade or so with Plone. The story really exposes the strengths of Plone and some of the things that it does particularly well and so it's a success story for Plone in that respect but it's also a cautionary tale about the kinds of weird traps that you can get yourself into with incautious customizations
01:02
or perhaps overly aggressive attempts to change things. We're gonna start back in the beginning. Plone 2.0 around 2005 or so, KCRW migrated their existing site which had been done in some other system into Plone
01:20
and created themselves something that looked a little bit like this. You can see that it's got a lovely modern design for 2004, 2005, a very nice color scheme, so on and so forth. It has that sort of typical top level navigation that we often see in Plone sites of that era with little drop down menus that contain sub navigation,
01:41
often really well restricted sub navigation so your path through the site is really clear. You can always find exactly what you're looking for and it's very easy to get around. Plone 2.0 was perfectly viable technology for its age and over the years the site was upgraded until about Plone 2.5 or so.
02:01
It went through the transition to Archetypes content types, was upgraded to about 2.5 and then it got kind of stuck. But like many quality projects of a certain age, Plone 2.5 was still perfectly viable and the site was running and operating really quite well up until recently.
02:21
So really a new theme or maybe new content organization doesn't justify the full cost of an upgrade, but there's more to this story, and a real reason why we ended up working on an upgrade for these folks. And that has a lot to do with the code that was underneath this site. If you take a look at this line right here,
02:42
it's sort of an interesting line. This is in the midst of a method that gets called every time that you view a particular kind of object in this site. In this case it's for a radio show. And when you view that show, there's a process that goes on that grabs a file out of the portal skins
03:01
directory, which is actually a little bit of Z SQL query stuff. And it does a SQL query against an external service that was holding a whole bunch of the data about these Archetypes content types. The developers who originally created this site had done this deliberately.
03:22
They used the SQLStorage project because they wanted some of the information to be available via a relational database to Perl scripts and other preexisting tools that they were using to expose some of this data through RSS feeds and the like.
03:40
However, as it turned out over the years, that stuff got abandoned and unused and so the SQL externals that were present in this content type became kind of an appendage that was no longer necessary. Unfortunately, appendages that are no longer necessary are not always easy to cut off without killing the patient
04:01
and so they were stuck with methods like this one that I was showing you here, where you go to an external Z SQL script, you grab some data from a database, you bring that data back into your website and you start populating a Python dictionary with it and then you spend some more time populating more
04:22
of the Python dictionary with it and then way down at the end of this big 200 line long method, you get to a place where that dictionary that you've just populated is stored persistently on the content object that you're viewing which means, of course, that every time somebody
04:42
comes to view this thing, there's action being taken against the database which opens up the door for all kinds of conflict errors and retries which really puts a tremendous amount of load onto the database and causes things to slow down quite a bit. Obviously, this isn't the most stable of configurations for code and this is only one place
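To make the trap concrete, here is a plain-Python sketch (not KCRW's actual code; names and data are invented) of the write-on-every-read pattern and the obvious repair:

```python
class RadioShow:
    """Stand-in for a persistent content object (names are illustrative)."""
    def __init__(self):
        self.cached_data = None   # persisted attribute -> a ZODB write on change
        self.write_count = 0      # how many times a "view" mutated the object

def view_show_bad(show, fetch_from_sql):
    # Anti-pattern: every page view writes the fetched data back onto the
    # persistent object, turning a read into a database write (and, under
    # concurrency, a source of conflict errors and retries).
    data = dict(fetch_from_sql())
    show.cached_data = data
    show.write_count += 1
    return data

def view_show_good(show, fetch_from_sql):
    # Safer: compute the data per request and return it without persisting,
    # so merely viewing an object never mutates it.
    return dict(fetch_from_sql())

show = RadioShow()
fetch = lambda: {"title": "Morning Show", "airtime": "09:00"}
for _ in range(3):
    view_show_bad(show, fetch)
print(show.write_count)   # -> 3: three views caused three writes
for _ in range(3):
    view_show_good(show, fetch)
print(show.write_count)   # -> still 3: reads no longer write
```
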
05:03
in which things were like this and this kind of rickety instability makes for very angry boss type people and that means that it's time for a change. This is how they were finally convinced that it was time to go ahead and take a look at upgrading their site to something else. So the question comes, we've been using Plone
05:22
for the past decade, is it really the thing that we wanna stay with? There are other options out there for us. We could use some other system. So why would we wanna stay with Plone? Well, we really have a site here that has a lot of heterogeneous content in a very deep structured tree.
05:40
We have shows, we have episodes of those shows, we have segments within those episodes, we have users, we have events, we have all kinds of other things and this sort of heterogeneous content tree is something that Plone does very well and also there's a whole bunch of this content. Since the site's been around since the late 1990s,
06:02
there's well over 300,000, 400,000, 500,000 objects in this database and moving that quantity of stuff into a relational database can sometimes be a little daunting. You find that often relationally backed websites don't end up performing particularly well with that many objects in them.
06:22
In addition, the client has a really broad editorial staff. They've got lots of people working in the site, lots of people adding content to the site and not all those people need to add content all in the same places. People who are the producers for a particular show really should have the ability to edit their show but not somebody else's.
06:41
That makes a lot of sense and Plone is very good at doing those sorts of things. Also, Plone features this kind of in-place management model which makes it very easy for people who are brought on board to find the content that they're trying to edit, to add new content to the site in the appropriate places and to manage things in a way that makes a certain amount of sense to them.
07:03
Plus, because they'd already been using Plone for a decade, there was a great deal of familiarity among the staff with the tool which made it nice and easy for us to stay there and the final strength of Plone is Transmogrifier and I'll tell you a little bit about why that's a strength in just a second here. After we've decided that Plone is in fact
07:21
the right tool for us and the one that we're gonna stay with, we decide, do we wanna upgrade the site? It's a 2.5 site. There's a feasible upgrade path from 2.5 up to 4.3. Do we wanna do that or do we want to migrate the site completely? Well, when we asked ourselves this question, we really wanted to be able to use dexterity content types for the new site
07:41
rather than the existing archetypes content types and at the time, there wasn't a migration path directly between those or an upgrade path directly from one type to the other. Additionally, these kinds of heavy customizations that I was showing you earlier meant that there was a lot of code in that database that was really not of best practice and preserving that code
08:01
and preserving the database bound artifacts of that code wouldn't necessarily be in the best interest of the project. We really wanted to start over and get a fresh start and so this means that Transmogrifier is the right tool for us. We started out by doing a migration out of the original database. We're starting from an old 2.5 site.
08:22
We're reading the content out of that using the quintagroup.transmogrifier product. We wanted to marshal that content out to the file system so that we could take a look at the files that we were getting and the data that we were getting and inspect it before attempting to reimport it into the new site. And we decided to marshal it out as JSON simply because that's a heck of a lot easier to read
08:42
than the normal XML that you get if you just marshal using the built-in tools. So we added on collective.jsonify to allow us to do that sort of work. We then wrote that structure out to the disk using a modified version of the quintagroup.transmogrifier writer and that gave us a pipeline that looked something like this,
09:01
a nice, simple sort of few-step pipeline that grabbed the content by walking the content tree and then wrote it out to disk as JSON files. We end up with a file structure that looks something like this and a whole bunch of JSON files representing different content objects that have just boatloads of properties on them and information that was part of their existence
09:21
in the former website. The next step then was to go ahead and define new content types. And this was taken on by Alec Mitchell, who was the project lead for it. He took care of creating some dexterity-based content types and some custom shared behaviors that would be used amongst these content types,
09:41
things like the IAirings behavior that was present for anything that might have an air date, whether in the past, the present or the future. The IScheduled behavior, which is something like plone.app.event but really bound to items that have a schedule that needs to be kept track of,
10:00
a show that shows up every morning at nine o'clock and every evening at 12:30 or something like that. IContentImages: a lot of their content had cover-type images that were meant to be shown through the UI, images with different aspect ratios to show up in different places in the site, and this behavior was created in order to manage those images
10:20
for the various content objects that would need them. There were several others, but I won't go into any of those in depth. Once the content types were prepared for us, then we were ready to go ahead and start working on the inbound migration. We would read in that JSON that we had read out before, written out before, and use that information to map the data
10:41
from the old types onto the new types. We used a split pipeline so that we could handle the import chain for individual content types differently from each other because each one of them had very specific needs, including the ability to remap locations. And so we wanted to be able to move content objects
11:00
from the location in which they were in the old site into a new location in the new site, sort of updating the information architecture of the site at the same time as we're moving the content across. Once we've built stuff to remap those locations, then we can go ahead and create the new content objects or identify existing ones and update them. This reflects the fact
11:20
that it was really important for our customer to be able to do this import process repeatably. We could take out the content at a particular stage from the live running site, import it into a staging site that they could mess around with, that they could alter the data for, and then we could re-export from the existing site and re-import and not wipe out the stuff that they had done,
11:40
but simply update and modify it over time. Final step, of course, is to reconnect related objects. The existing site had lots of user objects that represented radio show hosts or guests, people who were producers, people who were speakers, so on and so forth, and those needed to be connected back to the shows or the episodes or the segments that they belonged with.
12:03
There are a couple of features of Transmogrifier that I want to call out in taking a look at this. In particular, the splitter section that comes from collective.transmogrifier allows us to build these sorts of customized pipelines for individual types of content. You can make conditions on your splitter
12:23
that will allow you to say, only if a content object matches this particular condition should it go through this section of the pipeline. If it doesn't match this, then ignore it and pass it through. And that allows you to set up all kinds of nice little sub-pipelines that apply to different kinds of content.
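A splitter configuration along the lines described might look roughly like this; the `blueprint` value is collective.transmogrifier's real splitter section, but the pipeline names, conditions, and referenced section names are invented for this sketch:

```ini
[transmogrifier]
pipeline =
    jsonsource
    splitter
    final

[splitter]
blueprint = collective.transmogrifier.sections.splitter
# Each sub-pipeline only sees items matching its condition; items that
# match nothing simply pass through.
pipeline-shows-condition = python:item.get('_type') == 'Show'
pipeline-shows =
    show-constructor
    show-schema
pipeline-episodes-condition = python:item.get('_type') == 'Episode'
pipeline-episodes =
    episode-constructor
```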
12:41
Another feature comes from our need to be able to map people to the objects that they used to belong to. You can't really guarantee that people are gonna be in the database at the point where the content object they belong to is supposed to be imported. They haven't necessarily been imported yet.
13:02
And so in order to make connections between these things, we used one of the features of Transmogrifier, which is to have annotations on the Transmogrifier object itself. This means that as we are actually importing people, we can build a map over time from those people's previous Plone site IDs
13:21
to the UIDs that we're creating for the new people objects that we're creating in the new website. We store this map on the Transmogrifier import object itself, and then that means that the same annotation in that same dictionary is available to us in any other pipeline segment that comes along somewhere later in the pipeline than the one that created it.
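That bookkeeping can be modeled without any Zope dependencies: pipeline sections as generators over item dicts, with a shared dict standing in for the annotation on the transmogrifier object (all names here are invented for illustration):

```python
import uuid

def import_people(items, uid_map):
    # First stage: "import" person items, recording old-site ID -> new UID
    # in a shared mapping (standing in for an annotation stored on the
    # transmogrifier object itself).
    for item in items:
        if item.get("_type") == "Person":
            new_uid = uuid.uuid4().hex
            uid_map[item["_old_id"]] = new_uid
            item["_uid"] = new_uid
        yield item

def relink_content(items, uid_map):
    # Later stage: any content item carrying old person IDs gets them
    # rewritten to the new UIDs recorded by the earlier section.
    for item in items:
        if "hosts" in item:
            item["hosts"] = [uid_map[old] for old in item["hosts"] if old in uid_map]
        yield item

uid_map = {}  # the shared "annotation"
source = [
    {"_type": "Person", "_old_id": "host-bob"},
    {"_type": "Episode", "hosts": ["host-bob", "host-missing"]},
]
# Because generators are lazy, each item flows through both stages in order,
# so the person is registered before the episode reaches relink_content.
result = list(relink_content(import_people(iter(source), uid_map), uid_map))
# The episode's 'hosts' field now holds the freshly minted UID for host-bob;
# the unknown ID was dropped.
```
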
13:42
And that allows us to say, by the time some content piece comes in, if it has IDs in it that correspond to people, we can take those IDs, we can look them up in the mapping that we created as we were importing people, and we can find those people objects, put their UIDs into the correct fields
14:00
on the content objects to which they should be related, and magically join the two of them back together again, which was really quite convenient and nice to do. I think there are other approaches to importing site data from one place to another, but I think one of the great strengths of Transmogrifier is providing this kind of ability for you to do things in stages
14:21
and keep memory of the actions that you've taken so that you can react to them intelligently at a later point. So once we've got this whole pipeline set up, it's time for us to go ahead and run the migration, and of course with the amount of content that we had, there was the question of how to import images. Rather than trying to pipe over the full data of each image
14:42
which would have been a huge amount of memory taken up, we actually just made HTTP calls out to the old website and pulled the images straight off of the old web server through HTTP and re-imported them into Plone, saving us a great deal of memory but costing us time. It meant, of course, that running a migration
15:00
for the site by the time we were doing the full thing ran about 36 hours or so. It was quite an extensive migration in terms of its time, but the lovely thing about Transmogrifier and this generator-based approach to migrations is that despite the amount of time it took, it never exceeded a certain constant band of memory and CPU usage on the server, which meant the server was able to handle that load
15:22
and was able to perform the migration without really sweating, and that's a nice thing. So now we have content, but it's really not all that good-looking, so it's time to go ahead and make it pretty. The client engaged the services of a company out in New York City called Hard Candy Shell that does front-end designs
15:41
for some really big-name customers, people like the Wall Street Journal and other sites, and they provided us with a bunch of mock-ups of how the site ought to look, and these mock-ups were really quite lovely. They were not only well-formatted and featuring a nice, really modern-looking and stylish design, but they were also responsive, which meant that if you viewed the site on a tablet,
16:02
you got a slightly modified version with a little bit more space opened up around the UI elements so that they'd be easy to touch, and if you got down to the level of mobile devices, you really got quite a different interface. Things folded up well, and you got all the same information presented in a much more mobile-friendly fashion.
16:22
Additionally, as we took a look at this design, we discovered over time that one of the real strong features of this design was the fact that it featured these kinds of block-oriented content that would appear over and over in different places, and the blocks would feature a show or maybe an episode of a show or a segment from within an episode or something like that,
16:43
and they would appear in different places, and depending on where they appeared, they would look different. So they'd be the same content item, but they would have a different appearance depending on where in the layout they came from. They might be large and have a nice sort of marquee image on them.
17:00
They might be these very square things that have the play buttons on them. They might even appear as individual list items inside what appears for all intents and purposes to be a portlet in the standard Plone sense, but they're all really the same content objects underneath, and as we looked at this, this brought to mind for us a number of other requirements that were gonna influence how we approached
17:22
building the actual theme and code for this site. Some of the pages needed to be composable. Things like the front page of the site, they needed to have the ability for content editors to come in and update, to change the content manually with human intent on those pages, and this brought to mind for us
17:41
the product collective.cover, which allows you to set up these sort of template-based pages that then individual editors can come in and update portions of the page on their own. On the other hand, there would be other pages that were gonna be pre-built, really static in terms of their overall structure, but within sections of those pages,
18:00
the lists of content, the chunks that would show up in these blocked areas would be dynamically generated from each other, and that really brings to mind a more traditional Plone approach of creating like a custom browser view and some page templates that would end up showing the results of those browser views. But standard Plone browser views and tiles
18:21
are really quite different things, and we don't wanna write the same code over and over again, put the same page templates and the same HTML fragments in a bunch of different places, because that just means if they decide down the road somewhere that they wanna change how one of these things is built, we'd end up having to edit it in a bunch of different places. We really wanna try and keep to the DRY principle
18:41
for the code that we're writing. And as we looked around, we realized there was a package way down underneath Plone that would allow us to do this. It's this package called zope.contentprovider, and it provides, for those of you who are unaware of this, kind of the underpinnings of what has become the portlet structure in Plone, and there's kind of a reputation about portlets
19:01
in terms of their ease of use and so on and so forth, but this underlying idea of something that's like a browser view but really represents just a fragment of a page rather than an entire page isn't a bad idea. It's a pretty good idea. And the way they're implemented is as a multi-adapter of a context and a request, which is very much like a view,
19:21
but then a third element to that adaptation is the actual view itself, which means that you can allow an object to be rendered differently depending on which kind of view it happens to be sitting inside. And that's pretty compelling for a use case like the one that we're looking at. Moreover, if we start to name our adapters,
19:42
that gives us kind of a fourth axis on which we can do this adaptation, and that allows us to make differences based on the content types of the objects that we're actually rendering within these collections of things on pages. So in code, it ends up looking a little bit like this. We start off with this idea of a generic content provider.
20:01
Most of the interface for it is really pretty straightforward, right? You have some adaptation up at the top that we're providing through Grok that lets us know what kinds of things are going to be bound to this. We give it a name. Here we're calling it the generic small block content provider. This is gonna provide one of those small chunks
20:20
with a little play button down at the bottom if it's appropriate for whatever content type happens to come along. These things are initialized much like browser views are, simply by taking the context, request and view and binding them onto the content provider object itself. And then there's an update method, much like a portlet has,
20:40
where all of the data is prepared. The update method is called when this new thing is about to be rendered. Then the final thing that happens at the end is that the template is rendered with all of these attributes having been set via the update method. So this is the sort of process by which a content provider is turned into a piece of HTML.
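That bind/update/render lifecycle can be mimicked in plain Python without the Zope machinery; the classes and templates below are illustrative, not the project's actual code:

```python
class SmallBlockProvider:
    """Plain-Python mimic of a zope.contentprovider-style fragment renderer:
    bound to (context, request, view), prepared in update(), emitted in render().
    """
    template = '<div class="small-block"><h3>{title}</h3></div>'

    def __init__(self, context, request, view):
        # Bound exactly like a browser view, plus the containing view itself.
        self.context = context
        self.request = request
        self.view = view

    def update(self):
        # All data preparation happens here, before rendering.
        self.title = self.context.get("title", "Untitled")

    def render(self):
        # Render the prepared attributes into an HTML fragment.
        return self.template.format(title=self.title)

class ShowSmallBlockProvider(SmallBlockProvider):
    # A specialization changes only the template; the update() logic is shared.
    template = '<div class="small-block show"><h3>{title}</h3><button>play</button></div>'

show = {"title": "Morning Becomes Eclectic"}
provider = ShowSmallBlockProvider(show, request=None, view=None)
provider.update()
html = provider.render()
print(html)
```
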
21:03
Once we've got one of these, we can make kind of specialized versions of them really quite easily. So here's one that's for shows as opposed to other kinds of content within a small block. And you can see that really all we've done here is make a small change in the name for our adapter
21:20
and make a change to the template that's gonna be rendered out as a result of that. So it's got all the same attributes to it. We can use the same information from the show that we do from other kinds of content, but we have a completely different template that we can use to render this out. So this provides the core of the system that we're gonna end up using to make these little blocks appear all over the website.
21:43
The manifestation of this in cover tiles then looks a little bit like this. We actually make a subclass of the stock collective.cover persistent cover tile. So we've got a subclass of that that we're gonna call the content provider tile. And that's really just the exact same thing
22:03
as a normal cover tile except it has this extra attribute which is the content provider method. And what the content provider method is intended to do is to take whatever object is the context of our particular block of content here and adapt it to and render out
22:21
an actual content provider so that it turns it into HTML. So our tile templates themselves become very simple things. All of the tiles on our site share the same base template which says build the external wrapper for a tile and then call this content provider method
22:41
off of our view and render out the HTML that gets created by whatever happens to come out of that method. Now if you look back at the method down here at the very bottom, you'll see that it returns the result of calling an external function that's called content provider. And here's the source for that.
23:01
And really, all this is is the exact same render chain that's in the core of zope.contentprovider itself, updated just a little bit to allow us to look up named content providers instead of just binding them to the context, view, and request. So we have here a situation where we format a name
23:21
in a particularly predictable way up at the top. Then we build a list of available names and we iterate across them until we find one that returns an actual adapter for us. If we find none that have a specific name, then we always end up falling back to the generic version for whatever tile type
23:42
we happen to be doing. So there's always a generic fallback if you haven't made some specialization of this particular tile. At the end you simply update the thing that you're doing and then you render out that provider and that returns HTML. That HTML flows back through the view down here
24:01
at the very bottom of our tile and is rendered out into the page as an HTML fragment on the page that's in the middle of being built. So within the tile system this allows us to put together a cover that looks like this. We've got this nice tiled layout where we have three tiles showing up. Each one of them is represented by one pass
24:23
through this content provider from within the template that we have set up. So there's a tile that contains a show. That show gets rendered as the tile that you're seeing there. We also wanna do the same system within browser views and so we provide a browser view
24:41
that within its set of methods has also a content provider method and it's really just the same thing. It's a very simple pass through to the result of calling this content provider function that we've defined outside. The templates then can call that content provider method from somewhere inside the template.
25:02
Here we have a list of featured items that are supposed to show up on a show home page. So there may be featured segments inside that show that they want to have show up there. That list is dynamically provided by the view and then each one of the items in that list is rendered by its own individual content provider.
25:22
And once again, it's just using that same method that we used before for the tiles. So the core of what makes this all work is exactly the same, both in the tiles context and in the context of custom views. And that allows us to do things like the music landing page where we have all these featured things
25:40
showing up at the top, or the landing page for a particular music show where we have a section of featured tiles that are gonna show up within it. It's basically a static HTML page, just a plain template, but the pieces within it are generated dynamically using this same content provider approach. So the outcomes of building things in this way
26:02
meant that we were able to use the same content provider and that let us write all of our page templates once and use them across the site in many, many different contexts, inside portlets, inside tiles, inside browser views. We were able to really conserve the amount of template writing we did and when things got updated, there was really only one place
26:21
where you had to go to update the HTML. Also, having this idea of default names allowed us to make one content provider that would automatically work no matter what kind of thing you dropped into it and then specialize away from that generalized case and that was really quite helpful to us. Finally, we made a decision early on
26:41
that we were going to allow ourselves to adapt brains out of the catalog as context objects instead of requiring those things to be actual full-blown content objects. And this meant that when we were assembling lists of shows, or a list of episodes from within a show, we didn't have to call getObject a million times
27:02
on the results that were passed back to us. We didn't have to use up all that memory. We could just pull the brain and the metadata out of the Zope catalog and use that all by itself, which was really quite nice. There were some quirks, though. The design firm had used a custom grid system that they'd built themselves.
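To put a quick sketch behind the brains idea from a moment ago: a catalog brain is a lightweight metadata record, and as long as a listing only needs catalog metadata, it never has to wake the full content object. The classes here are stand-ins for illustration, not the real ZCatalog API:

```python
class FakeBrain:
    """Stand-in for a catalog brain: metadata only, no full object."""

    def __init__(self, Title, portal_type):
        self.Title = Title
        self.portal_type = portal_type

    def getObject(self):
        # The expensive call we want to avoid: it would load the whole
        # content object out of the database.
        raise RuntimeError("woke up a full object!")


def render_episode_list(brains):
    # Render purely from brain metadata -- getObject() is never called,
    # so no full objects are loaded into memory.
    return "".join("<li>%s</li>" % b.Title for b in brains)


brains = [FakeBrain("Episode %d" % i, "episode") for i in range(3)]
listing = render_episode_list(brains)
```

If any provider accidentally reached for the full object here, the RuntimeError would flag it immediately, which is roughly the discipline the site's adapters enforce by accepting brains as context.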
27:21
And within this grid system, I think probably due to the need for a responsive design, all of the rows within the grids shared the same markup. It was identical no matter what row you were in. The rows all looked the same, but cell markup differed depending on how many cells were supposed to show up within a row. And this idea of having cells responsible for knowing
27:43
how many other cells were in the same row with them is really from a dynamic, sort of programmatic building perspective, not ideal. We had to do some interesting folding in order to make that work. Luckily, Cover provides for us this idea of a customized grid layout engine that allowed us to do this.
28:02
It would have been nice to have input into that design decision. And I think down the road, if you're working with a design firm and they're making decisions like this, it's nice if you as a programmer or as the technical lead for a project can have some input in that process to say, the decisions you're making here are really gonna cause me trouble. Is there a way that we can come up
28:20
with a different decision that would be a little bit easier to work with? But given the tools that were available to us out of collective.cover, we were able to come up with a build row class method that has all of these special cases for the 750 million different ways that we need to lay out rows on a page. Yes, it's ugly code, but it does work.
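The folding problem can be pictured with a tiny sketch: the row wrapper is always identical, but each cell's class depends on how many cells ended up sharing its row, so the builder has to collect a whole row before it can emit any cell markup. The class names here are made up, and the real build_row handles far more cases:

```python
# Hypothetical cell classes keyed by how many cells share the row.
CELL_CLASS = {1: "cell-full", 2: "cell-half", 3: "cell-third"}

def build_row(cells):
    # Row markup is uniform everywhere; only the cells vary, and each
    # cell must know the total count for its row before rendering.
    cls = CELL_CLASS[len(cells)]
    inner = "".join('<div class="%s">%s</div>' % (cls, c) for c in cells)
    return '<div class="row">%s</div>' % inner

row = build_row(["tile A", "tile B"])
```

The awkwardness is that the cell count isn't known until the whole row is assembled, which is why emitting cells one at a time, as a naive layout engine would, doesn't work with this markup scheme.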
28:43
So I'm willing to let go of the kind of hideousness of the code there in service of saying, yeah, we managed to get it done. There were some other features that I'd like to talk about involved in this site here. One of the decisions that Alec made as the tech lead on the project was to use adapters to provide consistent APIs
29:04
for shared behaviors. So there are a lot of adapters in this site that can adapt to shows, they can adapt to brains, they can adapt to episodes, they can adapt to segments, and they provide a uniform API across all those different content types. The benefit of this is that we can then expose those APIs via views
29:21
as JSON or XML or just as simple Python calls, and then we can use client-side JS plug-ins or code inside our views to consume the data that's produced by those APIs and actually show it in page templates. And that was really, really effective. It also allowed the client at the same time
29:40
to engage another firm to design a mobile app that's completely separate that would use some of those same APIs, which I think was really well done. And the ability to provide adapters and the ability to use adaptation as a programmatic style is a great strength of Plone. It's something that I think we need to think very carefully about continuing to keep around for ourselves.
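The uniform-API-via-adapters idea might be sketched like this: each content type gets its own adapter implementing the same small interface, and a view serializes that interface as JSON without caring which type it adapted. The dict stands in for Zope's adapter registry, and every name here is illustrative:

```python
import json

class ShowInfo:
    """Hypothetical adapter giving shows the shared API."""
    def __init__(self, context):
        self.context = context
    def summary(self):
        return {"title": self.context["title"], "kind": "show"}

class EpisodeInfo:
    """Hypothetical adapter giving episodes the same API."""
    def __init__(self, context):
        self.context = context
    def summary(self):
        return {"title": self.context["name"], "kind": "episode"}

# Stand-in for adapter registration: content type -> adapter factory.
ADAPTERS = {"show": ShowInfo, "episode": EpisodeInfo}

def summary_json(obj):
    # A view can serialize the uniform API for client-side JS or a
    # separate mobile app, without caring which type it adapted.
    adapter = ADAPTERS[obj["portal_type"]](obj)
    return json.dumps(adapter.summary())

payload = summary_json({"portal_type": "episode", "name": "Pilot"})
```

Because the consumers only ever see the adapter's API, a second team building a mobile app can consume the exact same endpoints without knowing anything about the underlying content types.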
30:01
An example of this might be the IPlayable interface, which is meant to show anything that could potentially be played via a play button. Here's the interface for it. It's got all kinds of attributes and all kinds of little methods on it. There's an abstract implementation of this that will actually take some of those methods
30:20
that could potentially be identical across all different content types and implement them. And then there are individual implementations for things like content brains or for the playable shows or other things like that. So these would be specific implementations of certain methods that would be better suited to the content type that they happen to be adapting.
30:43
And the ability to use that kind of thing let us go ahead and build JavaScript like this for the dynamic flyouts on the show that would grab that IPlayable adapter. It would make a call that would give back the links that led to downloadable, playable media
31:00
that were associated with the shows or the segments or the episodes and then allow people to download those. And on the front end, it manifests like this. You can click on the little download button and you'll get links to iTunes and to Amazon and to Rdio and other services like that. And when you click on them, you get the download of that song from that episode
31:21
showing up right there in your browser. And all of that is done via adaptation and the careful consumption of well-designed APIs. Another feature is that the site has a JavaScript-based player for doing live or recorded audio. So you can actually listen to the shows on KCRW
31:40
as they air live. And one of the nice things about this player is that we can break it out into a standalone window. When it's viewed on mobile, if you close the app, the player can continue to play. You can then go back to the browser and stop it or you can just stop it by turning off the phone or something like that. And also, this player is actually persistent
32:00
across various page loads. So as you're interacting with the site, you can actually see the, let's see here. Here we are, good. So here we are on the KCRW live streams page. And if I scroll across here, I can play the stream from the Eclectic 24 broadcast in just a moment here.
32:23
We're listening live to KCRW's, well, actually it's not live. This is a pre-recorded stream of their audio, but this is KCRW's Eclectic 24 channel. It's very nice and exciting stuff for us to be able to do this.
32:40
And you can also then go and click on the little hamburger menu up here and you can jump to the news and culture page and you can evaluate what's on the news and culture page. And if you notice, the audio is still playing in the background. The track hasn't stopped. It hasn't jumped back to the beginning. It continues playing no matter where you go in this site.
33:00
So it's possible for you to continue navigating and just listen to the site. I think one of the fun aspects of this as a developer was that as time went along, we actually got to the point where we would, at least I, I don't know about the other developers, but I would turn on the radio while I was working on the site. And as I'm developing things, I would listen to the radio
33:20
and browse around the site and see the new features that I was building as I was continuing to listen to the feature that I built last week. And that's pretty cool. It was a nice feature for us, I think. So yeah, we've got this JavaScript player. It's persistent across page loads. A lot of that is done using history.js,
33:40
which is a plugin that's readily available in the JavaScript world. We did have to do a little bit of bending of it in order to make it fit into Plone. We have some extra helpers that make sure that we turn the links and all of the forms that show up in Plone into Ajax links so that, instead of submitting a full page request, they actually get fed through this
34:01
Ajaxify script. The Ajaxify script then loads the page in the background and does all the preparations for everything. It does things like loading up Google Analytics, so that you actually keep track of all of the clicks that are going on. You also get some extra JavaScripts being loaded up
34:20
so all of those dynamic features that work on individual pages are also part of things. But then in the end, you have this really nice history.js-based Ajaxified Plone website that allows you to keep that persistent player live and running no matter where you surf on the site and regardless of what you do. The final feature I wanna talk about
34:42
before I quit here today is the integration of Solr. We use this to provide sort of improved search results for the people. We use the alm.solrindex product, which I think is really terrific. You install it, it takes over the SearchableText index for your site, and it just works right out of the box. It's very simple and easy to get working with.
35:00
It's also customizable so that you can do things like add weight or add ordering or change around the way that you want your search results to come back and that allowed over time our clients to give us information that would allow us to tune the search better for them. In addition, we're working on a feature right now that'll allow us to index content
35:21
from external WordPress blogs so that all of the blogs that the different show hosts maintain on WordPress will actually be indexed inside the Plone site. Those search results will show up in the Plone site but when you click on them, they'll go out to the WordPress blog. And one of the benefits of the content provider approach that we used is that we can then take
35:42
these different external search results and have them show up in our search listings in a way that makes it clear that they're different from some of the other things that are going on without having to write a lot of extra code. It was really quite simple for us to adapt those search results to the existing stuff that we have.
36:01
We do that indexing using a Celery integration with collective.recipe.celery, I think it is. So we've set up these Zope tasks that gather up the set of blogs that exist within the website that we're gonna index, and then that one task that fires off on a daily basis will gather up all of the blogs that exist
36:20
and fire off individual tasks for each one of those blogs. So each one of the individual blogs is indexed asynchronously from all the other ones and asynchronously from the site itself. So that doesn't occupy any of your Zope threads getting all of this information in. And once you've done that, we have this cute little content object. It's not even a real content object.
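That fan-out — one scheduled task that only gathers the blogs and enqueues an independent task per blog — can be simulated without Celery itself. Here a plain list stands in for the broker queue, and the "worker loop" at the end plays the part of the Celery workers that, in production, run in separate processes off the Zope threads. Everything here is a simplified stand-in:

```python
task_queue = []   # stand-in for the Celery broker queue
indexed = []      # record of which blogs actually got indexed

def delay(task, *args):
    # Stand-in for Celery's task.delay(): enqueue, never run inline.
    task_queue.append((task, args))

def index_all_blogs(blog_urls):
    # The single daily task: it only fans out, doing no indexing itself.
    for url in blog_urls:
        delay(index_one_blog, url)

def index_one_blog(url):
    # Each blog is indexed independently of the others and of the site.
    indexed.append(url)

index_all_blogs(["http://blog-a.example", "http://blog-b.example"])

# The "worker" drains the queue -- in production this happens in
# separate worker processes, asynchronously from the web threads.
while task_queue:
    task, args = task_queue.pop(0)
    task(*args)
```

The payoff of the split is that a slow or failing blog only delays its own task, and none of the fetching ever blocks a request-serving thread.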
36:41
It never ends up in the object graph, but it provides a hook so that when we do indexing, the information about those blog posts ends up in the catalog without having to occupy content space with things that we don't really want. And then once we've got that, we have a nice little content provider adapter that works for the results that we're getting back
37:01
from the catalog for these particular blog posts, and that allows us to show the blog posts. I wanted to show you a listing of this, but this is still on our staging server and is not yet live, and so I didn't get a screenshot of it running. But it is out there and it's working, and as soon as they pull the trigger, we'll distribute it out to their live site and it'll be ready to go.
37:21
Before I finish up, I wanna give some thanks to a bunch of people who were involved in this project. First off to the client KCRW who trusted us with this job. It was a huge job. There was a lot of complexity involved in it and it was really a massive undertaking but it was a heck of a lot of fun to work on and I really enjoyed the work and I think the end results of it speak very strongly
37:41
for Plone and what Plone can do for a modern website that really fulfills all the needs of the modern web world. I'd also like to thank Alec Mitchell who was the technical lead on the project. His design decisions were just tremendous and he made some really, really, really great choices and was a lot of fun to work with. Both he and Carlos de la Guardia
38:01
who was another one of the developers who worked with us were really tremendous and a great team to be part of. I'd also like to thank Hard Candy Shell for the feature rich design that they provided us with. It was really an interesting experience working with a design firm that was so well established and really well known.
38:20
It's a tremendous opportunity to get to try and turn something as interesting as their designs and their JavaScript interactions into a live Plone site and I'm really pleased with the results. I'd also like to personally thank the creators and the maintainers of all the add-ons that we used, both those ones that were named specifically within this presentation and the other ones
38:40
I didn't have time to go into. Without all of you people and the work that you do, the kinds of sites that we build for our customers would just not be possible and it's marvelous that you all are out there. And finally, I'd like to thank the Plone Conference itself for the chance to come up here and show it all off to you. Last, I'd like to thank you for coming here and listening to this long and rambling speech
39:01
about this wonderful upgrade and if you have any questions, I'd love to take them now. And I'm looking out and everybody's sitting very still. There's a question over there, thank you, Jean.
39:26
One of the microphones will turn on, I am certain. If it doesn't work, I'll repeat.
39:55
I think that's actually not a bad idea. The question is, is there a way that we could maybe create a training course
40:01
or some other materials based on some of these techniques? And yeah, I think putting together a technical blog post that would cover some of these techniques and give a little bit more in-depth look at them rather than the quick scan across the code like this is something that we should do, I agree. So Sally, remind me to do that the next time I have some time.
40:23
It's a great idea, Jean, thank you. Any other questions about anything that we saw here? Yeah, I looked at the two of them
40:40
and collective.solr really is a lot more holistic in terms of the way that it takes over the cataloging responsibilities, whereas alm.solrindex is really just the SearchableText index. And for our particular use case, that really made sense. It was something where we didn't want to be
41:00
overly aggressive about what we took over. And also, I think as a second reason, I've used alm.solrindex a number of times on a number of different projects and I'm really comfortable with it. And as I looked at collective.solr, one of the things I really liked about it was the use of adaptation inside it to allow you to do different things
41:22
with the results that you were getting back. I thought there's a lot under that product that I'd like to look at more intensely over time. But for this particular project, I think just replacing that one index was really all we needed and it provided us with the kind of mutability that we wanted in terms of bending the search results and it had the familiarity that let us move quickly.
41:47
Oh, and the question was why we used alm.solrindex instead of collective.solr, which is another product available for the same sort of purpose. Sorry, I never remember to repeat the questions.
42:03
All right, well, thank you all very much for attending. I appreciate your time today and go listen to kcrw.org online. It's a lot of fun. Or kcrw.com, I guess it is.