
Plone Conf 2020 - day 4: Lightning talks


Formal Metadata

Title
Plone Conf 2020 - day 4: Lightning talks
Number of Parts
72
License
CC Attribution 3.0 Germany:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Production Year
2020

Content Metadata

Abstract
Lightning talks, recorded live, from: Philip Bauer - Theming for small projects, Matt Hamilton - deploying on a S390 mainframe, Manabu Terada - Volto site for Osaka university, Japan, Sean Kelly - Semantic Plone (and speaking REALLY fast...), Philip Bauer - trainings for Plone, Sven Strack - Make Content Matter (Plone & related documentation), Carlos de la Guardia - Questions Form Library example, Mikko Ohtamaa - Hiring world class remote developers efficiently from StackOverflow.
Transcript: English (auto-generated)
Cool. Okay, guys and girls all over the world. Now it's time for lightning talks. The first talk is by Philip Bauer, Theming for Small Projects. Go on Philip.
Philip, hello. Mr. Philip Bauer. Yeah, my mouse refused to work and now it's working, sorry. Okay. Okay, so yeah, I have two presentations
so I thought I was starting with the other one. So, Theming for Small Projects. My approach to small, I have a lot of small projects, and my approach is the following, the same as we do in Plone: when we develop code, we use and adapt everything that's there,
that is, the default theme. We change only what we need to change, and we keep our changes in a package to get a git diff. So that means I often have only one CSS file and one JavaScript file that are registered in a bundle. This example is from the Mastering Plone training.
Everything visually visible that has to change is overridden with z3c.jbot, for example, the news item. But what about the Diazo rules? There is no way, or there was no way, to override static resources, and a Diazo rule is a static resource. This was fixed this year by Malte Bochs.
Thank you very, very much. I've been waiting for this for quite a while. And now you can use z3c.jbot to override the rules.xml from Barceloneta. So create a plonetheme.barceloneta.theme.rules.xml, for example. So these are four examples of files that you can override: the index.html, the rules.xml,
the Barceloneta favicon. This is the only way you can actually override the favicon in Plone without creating a custom Diazo theme. And also, last but not least, the TinyMCE styles CSS file, which is otherwise also unoverrideable,
which gives you custom styles in your TinyMCE editor. Here's one example. This is a Plone site and there are only two changes, or three changes including the favicon, that I made. One is in the index.html, where I moved the navigation into the header, so it's next to the logo
and has a shorter width, so it's not full width. And the second change is that the row width, you see on the left side, you have the portlets, they are wider than by default. This is a change in the rules.xml file, where the row width, the column width, is defined.
This is a two-file jbot override. These are small changes. They give you a small diff. It is very easy to upgrade to a new Plone version, and it has a great effect on my productivity. It is included in Plone 5.2.3
and you can use it in all the Plone versions: just pin z3c.jbot to 1.1.0 and it works in all versions of Plone and Python that are still supported and that I know of. Thank you. Congratulations on time, my friends.
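The override mechanism Philip describes works by filename convention: z3c.jbot matches an override against the dotted path of the resource it replaces. A minimal sketch of that naming rule, assuming a helper name of my own (not part of z3c.jbot's API) and illustrative resource paths:

```python
def jbot_override_name(dotted_package: str, resource_path: str) -> str:
    """Build the filename z3c.jbot expects for an override: the
    package's dotted path plus the resource's path segments, all
    joined with dots. Helper name is illustrative, not z3c.jbot API."""
    return ".".join([dotted_package] + resource_path.split("/"))

# Two of the overrides mentioned in the talk, targeting Barceloneta
# (paths assumed here for illustration):
print(jbot_override_name("plonetheme.barceloneta", "theme/rules.xml"))
print(jbot_override_name("plonetheme.barceloneta", "theme/index.html"))
# plonetheme.barceloneta.theme.rules.xml
# plonetheme.barceloneta.theme.index.html
```

Drop a file with the resulting name into a directory registered as a jbot overrides folder, and it shadows the original resource.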
Excellent. So the next lightning talk is by Matt Hamilton. Welcome back to Plone Conference, Matt.
There we go, found the unmute. Okay, can you see me? Hopefully, hello, Plone people. Yes, I see you. Brilliant, great. Okay, long time no see. Hello, so I'm gonna do a talk today about a thing called Docker, right? So maybe about 15 years ago, some guy called Nate Orney
at the top of a mountain in Austria was talking to me about some thing called containers and Docker and kind of, yeah, yeah, yeah. And there's some guy Sven who keeps going on about it and some other things, more container stuff or whatever. So, you know, I thought, okay, well, I'll have a go. And apparently you can use Docker now to install Plone.
Right, so, okay, let's have a go here. I've got a Linux server set up here. And for some reason, it's gonna show me that it's sharing the screen up above that. But anyway, let's log on here to this SSH, this Linux server. And let's try here.
So apparently I can run sudo docker run -p 8080, none of the buildout stuff, this is all magic, right, plone fg, and it's gonna go and do some magic stuff. Okay, folks, so it's downloading a bunch of stuff here and bringing it down in layers and all sorts.
I'm gonna run this all automagically. And there we go, I think it's got it all. So with a bit of luck, it's gonna start up Plone. We're gonna see the usual kind of stuff. Here we go, yeah, it's all coming by here. So I think we've got a Plone instance running. So let's have a look and see.
If I bring up a new tab here, localhost 8080. Ah, boom, there we go. It's running and create a new Plone site and ta-da create Plone site. And it's gonna churn a little bit and da-da-da. Ta-da, Plone through Docker, right?
Great, isn't this brilliant? But yeah, so what, Matt, you're 15 years behind and you know, okay, great, we can do this and there's better things now and blah, blah, blah, blah. Okay, yeah, yeah, yeah, yeah. I'll get with the program. So anyway, let's just have a quick, just kind of a quick look around here. I remember this thing called the control panel
from way back when. So let's have a look in here, control panel configuration. Let's look through here. Here's all the eggs. Can anybody notice anything different here? I don't know if I can hear you,
but shout it out if you can. Look here, does anybody notice anything that's maybe a little bit different than what you've seen before? Yeah, I'll give you a clue. S390, so I've just run this and this is not on your average x86 box,
your Dell or whatever server. You know what this is running on? One of these, an IBM Z series mainframe. This is an S390 architecture, IBM Z14, Linux 1.3 mainframe. This one is the one it's actually running on over at Marist College in the US.
This thing has like 170 processors and gazillions of everything of RAM. And there we go. Isn't that magic? Actually, I just did exactly the same as what you can do on x86. And I've just installed, set up and run Plone, including all this Python stuff,
its C-compiled bits and everything, all on a completely different architecture, actually running live on an IBM mainframe. So there we go, and thanks very much, hope that was fun. Thank you. That was really good, Plone on a mainframe. Now we are going to have Manabu Terada.
Chrissy, can you do the magic? Yes. Hello, everyone. I'm Manabu Terada from Chiba, Japan.
I'm fine, but I could not go overseas this year. Our situation is the same as in the rest of the world. So we are working from home with our co-workers online. Okay, I will talk about making Volto sites
at a Japanese university. Let's start. I'll introduce the University of Osaka, with Brian Lee. The University of Osaka is a large university in Japan.
The university has been using Plone for its official site, its intranet site, and various other sites since 2009. The first version related to our project was Plone 3.0.
This is the official site, using Plone 4. This is the intranet portal site, using Plone 4.
Next, I'll introduce Risso. Risso is one of the sites that we worked on, used to publish official research releases. The site has published over 1,200 releases
since 2012. The site is available in Japanese and English. Almost all content can be read in both languages.
This is the old site for Risso, using Plone 4. Risso was rebuilt with Volto and Plone 5.2. We delivered it last week.
The design concept uses the new technology and gives a great user experience on a smartphone. This is the new site for smartphones, the top page.
This is for the PC browser. This graph is the trend history. This is a content page, okay? This is the smartphone site. This is the top page in English, corresponding to the Japanese one.
It's okay. We decided to use Plone at the last Plone Conference. We explained the good points of Volto and SPA, meaning, sorry,
single page application sites, to our customer. We made some demo projects and some challenging implementations. We encountered some errors and confusing issues
and the project was postponed for about half a year. But we got support from Timo and Victor. Thank you very much. They were satisfied with the delivery of the SPA site.
We are happy with our experience using Volto. So we know CMS and SPA are very difficult, but we were successful. Next, we will make the official site,
the official Osaka sites, with Volto. It's a very, very tough task. Please wait for our update next year. Thank you for hearing my presentation.
See you next conference at physical. Thank you Manabu. Now we also have someone from our best. I'm glad to invite Sean Kelly to the stage.
All right, thank you very much everyone. Can you hear me? Okay, yes, not all. Yes, we can. Okay, super. All right, let me share my screen here and we'll get started. All right, desktop one. Well, thanks for having me back, everyone. This is a quick talk about using semantic web technologies inside of Plone.
So semantic Plone, curating content with RDF. We're gonna do about 50 slides in five minutes. So no time to waste. Here's how the talk is going to go. Hopefully this goes faster than it takes to summarize Proust. So who's actually doing this? Well, the early detection research network, EDRN, develops ways to discover cancer as soon as possible. And they do so by using bioinformatics which is just a fancy way of saying computer stuff
for biological research. What is the problem? Well, it's this. We've got a bunch of different applications running right now. And a portal that uses, well, you know what. And all of these applications use different technologies to implement them, and they're kind of non-interoperable. So we started using RDF to exchange information between them. What is RDF? It's the Resource Description Framework,
which is a semantic web standard. It lets you make statements about, well, anything. Here's a statement, for example. And here's that same statement that we've charted out. It has a subject, a predicate and an object but this is imprecise. So instead we use uniform resource identifiers in order to say in terms that computers can understand what we mean by a book or by a creator and what that is. These are triples, triples of subject, predicate and object.
Those are URIs; objects can be literals or they can be other URIs, which allows us to make an entire network of knowledge and information, a graph of knowledge, if you will, that, if we were to serialize part of it, at least in XML, there are other formats that you can serialize RDF into, would look something like this. So let's use the RDF graph
inside of the early detection research network. That is we'll send all of the information about biomarkers and publications and studies and protocols into the portal using RDF. So here's what the actual RDF for the research network looks like. We've got, well, it's pretty scientific but we'll start right here. The object identification, the subject URI
means that we'll need an identifier of some kind for our particular objects in the portal. So what we do is we make an identifier attribute for a knowledge object. And of course we'll need to catalog this and we'll be able to look it up later. And from that, we can make an entire class hierarchy of custom content types that all derive from the knowledge object.
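The triple model Sean describes, subject, predicate, object, with URIs for the first two and URIs or literals for the object, can be sketched in plain Python without any RDF library. The URIs below are illustrative, not EDRN's actual identifiers:

```python
# Each statement is a (subject, predicate, object) triple; subjects and
# predicates are URIs, objects are URIs or literal values.
triples = {
    ("urn:book:moby-dick", "http://purl.org/dc/terms/creator",
     "urn:person:melville"),                       # object is a URI
    ("urn:person:melville", "http://xmlns.com/foaf/0.1/name",
     "Herman Melville"),                           # object is a literal
}

def objects_of(graph, subject, predicate):
    """Every object asserted for a given subject/predicate pair."""
    return {o for s, p, o in graph if s == subject and p == predicate}

print(objects_of(triples, "urn:book:moby-dick",
                 "http://purl.org/dc/terms/creator"))
# {'urn:person:melville'}
```

Because object URIs can themselves be subjects of other triples, following them chains statements together into the graph of knowledge the talk mentions.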
Plus we'll have a container knowledge object or knowledge folder for that, which we'll see later. So now let's look at the various attributes here. We've got lots of predicates for things like the organ, the principal investigator and so forth. Some of those are literal values and some of those are references and we have the RDF type. So we need to map the RDF type onto a specific dexterity custom content type
such as the data collection here. And then the Zope schema fields of that content type are the various predicates. So how do we go from the RDF predicate to a schema field? Well, the secret is to use tagged values, a little-known feature of zope.interface. This allows us to associate basically any kind of program-specific information that we want, such as the predicate map.
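The predicate map idea, mapping an RDF predicate URI to a schema field name and type, can be sketched like this. In the real code it lives in tagged values on the zope.interface schema; a plain dict shows the shape, and the predicate URIs, field names, and helper below are made up for illustration:

```python
# Hypothetical predicate -> (field name, field type) mapping; the real
# implementation stores this as tagged values on the dexterity schema.
PREDICATE_MAP = {
    "urn:example:predicates:title": ("title", str),
    "urn:example:predicates:organ": ("organ", str),
}

class KnowledgeObject:
    """Stand-in for a dexterity content object."""

def apply_statements(obj, statements):
    """Set schema fields on obj from (predicate, value) pairs,
    skipping predicates we have no mapping for."""
    for predicate, value in statements:
        entry = PREDICATE_MAP.get(predicate)
        if entry is None:
            continue                      # unknown predicate: ignore it
        field_name, field_type = entry
        setattr(obj, field_name, field_type(value))
    return obj

obj = apply_statements(KnowledgeObject(),
                       [("urn:example:predicates:title", "Lung biomarkers")])
print(obj.title)
# Lung biomarkers
```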
How do we map from an RDF predicate to the name of a particular field in the dexterity interface class, plus what type we'll have to use, and factory type information? Now we can do what's called the RDF ingest. What is the RDF ingest? Well, basically that lets us visit a specific URL, which we do from a cron job, which calls on the root an RDF ingestor
whose job is basically to say, hey, portal catalog, show me all the knowledge folders adapt them to an ingestor and tell them to ingest themselves. A knowledge folder contains knowledge objects and it has as an attribute to all of the RDF data sources that it needs to pull from in order to populate itself with content. So how do we implement the ingest method?
Well, this is where another feature comes into play: introspection, a not-often-used feature of plone.dexterity, unless you happen to be an author of plone.dexterity, in which case we can use the knowledge folder adapter for the ingest to implement this ingestor class. Now, hang on to your hats. There's a lot of stuff going on here.
Too fast. Basically what happens is we have, oh, I'm losing my place now. Oh my God. Oh no, and I'm running out of time. How much time do I have left? So, you know what? I'm gonna quickly skip over that next slide and we'll go right on to the next one here. We have set algebra, which allows us to determine what are the new objects, the dead objects, the objects that we need to dump.
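The set algebra step is ordinary Python set arithmetic over identifiers: compare what the portal already holds against what the incoming RDF describes. The identifiers below are invented for illustration:

```python
# Identifiers already in the portal vs. identifiers in the incoming RDF.
existing = {"urn:edrn:bm:1", "urn:edrn:bm:2", "urn:edrn:bm:3"}
incoming = {"urn:edrn:bm:2", "urn:edrn:bm:3", "urn:edrn:bm:4"}

new_objects    = incoming - existing   # create these in the portal
dead_objects   = existing - incoming   # these disappeared: remove them
update_objects = existing & incoming   # still present: update in place

print(sorted(new_objects), sorted(dead_objects), sorted(update_objects))
# ['urn:edrn:bm:4'] ['urn:edrn:bm:1'] ['urn:edrn:bm:2', 'urn:edrn:bm:3']
```

Handling updates separately from creates, as the talk explains, avoids the cost of re-creating objects and preserves relations to them.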
From that, we can take the lots of get tag value calls to figure out what is the interface of the dexterity thing that we're doing, create a new instance of that object in the portal and then set each of the values in there. Now here comes the introspection. What we do is we call on the interface itself, get with the name of a particular field we want,
bind that to the new object that we just created. I'm gonna skip the as reference part. Let's drop down to the non-reference sections here. And here, if it's multivalued, we can validate all the values that we've been given in the RDF and then set those values. Oh, if it's a single value, we just use the zeroth item. As for update objects, we don't have time for that. You can check out the source and see that.
Why do we do the update separately from the creates? Well, creation is a bit more of an expensive operation, but more importantly, we don't want to break relations from the existing objects that we've already established. Does it work? Well, as a matter of fact, it does. This is what the site looks like and over 5,000 objects are curated within it every 24 hours. You wanna take a look at the source, you can find it here. Here's some of the future work that we're gonna do on it. And if you have any questions here,
you can reach me. Thank you very much for having me back. Thank you, Sean. Now, Philip Bauer, Philip. Yes, I'm here. I'm a bit speechless from that talk.
Impressive. Okay, quick talk about training for Plone 6. So how to get yourself ready for Plone 6. At the Plone Conference in 2020, this year, we had over 120 training attendees and three new trainings. We had a training by David Bain called Getting Started With Your Plone Site.
There is a four hour video on YouTube. It will hopefully be linked somehow on that page and, yeah, on the conference site. It has slides that were updated for it. There was a brand new training by Tiberiu. Thank you very, very much for that.
Two times four hours. You can enjoy the video online and read the excellent written documentation on training.plone.org. Everything you need to know, the newest Volto add-ons, hot shit. New, yes. Mastering Plone 6 is mostly rewritten by me and Katja and Janina and a lot of other people who helped.
There are two four hour videos that I had to do alone, since Katja was sadly sick. The docs are also on training.plone.org. There are docs for Mastering Plone 6 and Mastering Plone 5 side by side now. There was an updated version of the React and Volto training. Thank you very much Alok and Jakob who did these.
Also the videos are online and the docs online are also updated. There was a training by Steve Piercy. I couldn't get him to answer my question about the written documentation before this lightning talk, but maybe I'll hand it in later. Here are some highlights from the mastering Plone 6 training that we gave.
It teaches you Volto views for custom content types and listings. It teaches you to override parts of Volto. None of that is new if you already did any Volto training, but still, it has a full dexterity reference for all built-in dexterity field types, including some community add-on field types.
It even has screenshots for Plone Classic and for Volto for all of these fields. It has a chapter about custom control panels that work in Volto and in Classic with the same code. So you can make vocabularies from these control panels that you can use in your schema. It has a chapter on upgrade steps
that's actually pretty old, but it still works fine. It has a chapter on extending the PastaNaga editor. It has a chapter on custom blocks. There's a nice Q&A block you can steal from that. And I'm very proud of that. There is a very new set of chapters that are not entirely done about voting,
reviewers for a conference vote on submitted talks. And because there is an API and storage layer in the backend for these votes, there's also a viewlet in the old training. Then there is a custom REST API endpoint that exposes this API.
And then there are React components, router action and reducer to actually consume this information. Use these trainings as reference, as documentation. I use them every day to copy and paste into my projects. Please use it to train your coworkers. And if anything doesn't work well,
tell us, write a pull request or a ticket or send us an email. There are some future trainings that are in the planning. We want a deploying Volto training. Everyone's waiting for that. I'm looking at you, whoever thinks I'm looking at him or her.
Theming Plone Classic. Yes, you know what I'm talking about. And maybe also a developer quick start. We were talking and planning about that, but we haven't finished that yet. There are a couple of hidden gems that you probably don't know about. Testing Plone, an excellent training by Andrei Ekeci that he did for the last conference, and an excellent...
no, yeah, I think for the last conference, and a Plone workflow training by Kim Nguyen. And find out about a lot of other trainings on training.plone.org. Thank you very much. Thank you, Philip. And now we have Sven talking about documentation.
Yes, let me try to figure out how that works. How I can share. Okay.
Okay, security settings I can't share. Annoying thing, I hate it. Anyway, this is like an update about, I'm sharing now or not?
No, you're not. Now? Now you are. Okay, this is basically an update from my talk last year at the conference about the state of the docs. So it's about Plone-related documentation. And it's like state of the docs, then and now, and basically what has happened over the last year,
because the docs are not dead, even if it seems like they are dead. And it's awesome that we have training docs, but training docs are a completely different concept than documentation. So let's start with the state of the docs.
So there was not much happening as in adding more content or updating content, but this is not only the fault of the docs team, it's also on all of you, because the docs for years have been living from your contributions. And if you're not adding docs, then there will be not much docs. But what we basically did was take the time
to get our basics right. And this is basically because of the changes for Plone 6, Volto and some other changes for Guillotina and things like that. So we took the time, or we thought it's the perfect time, to straighten out old mistakes and really get back to basics.
What does that mean? That means we now have extensive style guides for content, like editorial style guides and writing style guides, and style guides for markup languages, including Markdown, which is 100% done, and RST, which is 90% done.
We have worked on a completely new content structure for docs.plone.org, taking into account Volto and also the awesome Bootstrap 5 theming things. The old build and deploy infrastructure I was talking about last year is now done and it's working. It's already running in production,
not for Plone but for some other projects, and lots of other small tiny things. So let's jump to content and markup guides. As I said, we now have content and editorial guides, covering everything from accessibility, over wording, being friendly, diversity, all that stuff. Then we have a completely, 100% written Markdown guide
on how you should use Markdown, also completely compliant with CommonMark. We have the same for reStructuredText, but this is not 100% done. It's basically missing the last 10% to make it aligned with the Markdown guide. This is a screenshot of how the entry page
of the editorial style guide will look. So we are focusing on accessibility, bias-free communication and lots of other best practices, using what Google and Microsoft and lots of other awesome projects are using. This is a screenshot of
the Markdown style guide. We still have to update the theme, it's the same theme as for the editorial style guide. And you can see guidelines for headings, code blocks, comments, naming conventions, strings, tables, lots of stuff, and there's even more coming. For markup, Volto and Plone 6, as told last year, will be in Markdown,
and the old docs will be Markdown and reStructuredText. Focus on audiences: we do use-case driven documentation, a flatter structure, removing old stuff, linking more to the trainings, making shorter examples.
And we now use something like 12 different linters with around 160 different tests. And they are blazing fast. A test run with docs.plone.org took something like 10 seconds to test and deploy. And that was it. Thank you, Sven.
And now we are going to have Carlos de la Guardia. Yeah, where's my zoom thing? Yeah, and Sven lost the zoom. It's okay, I'll take it over. Hi, this is Carlos de la Guardia.
I just gave a presentation about questions which is my form library for Python. And I was not able to show one of my examples because I put the files in the wrong place. And I wanted to show it because I think it's a nice one. So if you don't know questions,
it's a Python library for form generation. It's on PyPI and it's on GitHub. You can take a look if you want. And the thing that I want to show is how to create a form by importing JSON schemas.
So my form library is based on SurveyJS, which is a very powerful JavaScript library for survey and form generation. And SurveyJS offers this free creator tool. It's free to use, not open source, but you can use it freely on your site. And here we have a very complex form
that was created using this creator tool. You can see it has lots of questions. It's a multi-page form with 10 pages of medical questions that you can edit and change to your heart's content until you have everything that you want
behaving the way you want. After that, you can go to the JSON editor, and all the JSON for the whole form, all 10 pages, is generated: about 1,500 lines of JSON.
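For illustration, here is a minimal SurveyJS-style survey definition in the spirit of the export Carlos describes. The question names and titles below are hypothetical, and the real export is about 1,500 lines; this just shows the shape of the JSON (pages containing elements, with a `visibleIf` expression for a conditional follow-up question):

```json
{
  "pages": [
    {
      "name": "page1",
      "elements": [
        { "type": "text", "name": "patient_name", "title": "Patient name" },
        {
          "type": "radiogroup",
          "name": "has_allergies",
          "title": "Do you have any allergies?",
          "choices": ["Yes", "No"]
        },
        {
          "type": "comment",
          "name": "allergy_details",
          "title": "Please describe your allergies",
          "visibleIf": "{has_allergies} = 'Yes'"
        }
      ]
    }
  ]
}
```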
And my library can take this JSON and generate the form automatically for me. So I have the JSON file here on my computer. Here it is; it's a long file, like I showed you before. And using questions,
I can just import Form and then read the JSON file into a variable here. Then I just call the from_json method of my form.
It's a class method that creates a form class from the JSON. Then we use that class to generate an actual instance of the form. Here it is.
And that's it. You get the whole 10-page form and it works. I can send the data of the completed form to Python. Here is the Flask application that I made, and here's the form working live.
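The pattern Carlos describes can be sketched as below. This is a minimal sketch, not his actual code: the toy schema stands in for the 1,500-line export, the exact `from_json` signature is an assumption based on the talk, and the `questions` import is guarded so the sketch runs even where the library is not installed.

```python
import json

# A tiny stand-in for the exported SurveyJS definition
# (the real one is about 1,500 lines and 10 pages).
schema_text = json.dumps({
    "pages": [
        {"name": "page1",
         "elements": [{"type": "text",
                       "name": "patient_name",
                       "title": "Patient name"}]}
    ]
})

schema = json.loads(schema_text)
print(len(schema["pages"]))  # 1 page in this toy schema

# Hypothetical usage of the questions library, as described in the talk:
# from_json is a class method that builds a form class from the JSON,
# which is then instantiated to render the form (exact API assumed).
try:
    from questions import Form

    MedicalForm = Form.from_json(schema_text, "MedicalForm")
    form = MedicalForm()  # instance used to render and process the form
except Exception:
    # Keep the sketch runnable where questions is missing or the API differs.
    pass
```

In a Flask view, the instance would then be passed to a template for rendering, and the submitted data posted back to Python, as shown live in the talk.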
So you can see, as I already mentioned, it's very complex. There are lots of questions. You can go back and forth through all the pages of the form. There are the different interactive widgets
that you usually see in form libraries, plus you get super cool dynamic functionality that is very easy to add. In this case, we can add
medical conditions, and it's very easy to add more rows. I can do the same here.
Go from page to page, and there are fairly complex questions. It's a very long form, with very different types of questions.
It's very easy to create functionality for, for example, showing a question conditionally: if I answer yes here, I get follow-up questions here. The same down here. The whole 1,500 lines of JSON generate
this long form, and that's it. This is what the form library does, and I hope you find it interesting. Thank you very much.
Thank you, Carlos. And now another welcome back, Mr. Mikko Ohtamaa. Hello everybody. Welcome to the last lightning talk of this decade. I hope that you have had an excellent decade, that it gets the ending it deserves,
and I hope that this decade really ends really fast. And I'm the last person standing between the end of this decade and a glass of wine for you all. So let's go fast. At the moment, I'm working for a company called First Blood.
It's an esports platform. So if you have kids and those kids play PC and console games too much, they can go to our website and earn money while playing those games. And I'm talking about hiring, and especially about hiring remote developers through a service called Stack Overflow. This year, I have processed 2,000 applications
and hired 16 devs because even if the company is three years old, there was a mishap, a human resource management problem and we lost 50 devs and CTO in one day and since then, I have been just rebuilding the organization. And recently, our hiring method was featured
in a Stack Overflow case study. You can find it on the Stack Overflow website, and you can also find posts on my LinkedIn. And what kind of devs do you want to hire? Basically, there are two axes: skill and salary, and you want to find somebody who is asking less money for more skill.
That is what it all comes down to. You can always hire very expensive, very good devs, but in the end, if you are a business owner, you are just going to waste money; you want to find someone who is really good but not asking too much money. And because of the pandemic, everyone is remote now.
So it does make sense to hire remotely, to hire somebody you don't know, because there are so many devs out in the world who are very good and whom you don't know about. The Stack Overflow Talent service costs 2,000 USD for half a year,
and you can advertise there; it's the largest software developer market in the world, a top-50 site by Alexa rank. And when you have reach, you have a good market. You get a lot of applications; with a lot of applications you get good data, and with good data you can obviously make good hiring decisions.
Here's an example: the average salary of the people who applied was 2,500 USD, but those who got an interview asked 600 USD less. There were even people asking, for the same position, anything from 1,000 USD per month
to 30,000 USD per month. And of course, if somebody is asking 30,000 USD per month, they are not going to be hired. The thing with Stack Overflow is that it's super, super popular, as I said. For each position, we get hundreds of applications
from all over the world. You can see the funnel here: we start with 5,000 views, then 700 clicks, 200 applications, five interviews, and one hire. So the hire rate is half a percent, the same as for The New York Times if you apply as a reporter. And because we get so many applications,
the process must support it. We have a form whose submissions go to a Google spreadsheet, and there we sort out the candidates. The best candidates are invited to a coding exercise, then we interview, and then we decide whom we are going to hire. And when you are hiring remotely, here are the four most important questions
you need to ask. One is their working hours, so that you have overlap across time zones. Then you need to ask for the salary expectation upfront, because for the same position people will ask anything between 1,000 USD per month and 20,000 USD per month. And then the number of years in the industry, and their domain-specific skills.
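The funnel quoted earlier (5,000 views, 700 clicks, 200 applications, 5 interviews, 1 hire) works out to stage-by-stage conversion rates like this; a quick sketch of the arithmetic, using only the numbers from the talk:

```python
# Hiring funnel from the talk: views -> clicks -> applications -> interviews -> hires
funnel = [("views", 5000), ("clicks", 700), ("applications", 200),
          ("interviews", 5), ("hires", 1)]

# Conversion rate between each consecutive pair of stages.
for (stage, n), (next_stage, m) in zip(funnel, funnel[1:]):
    print(f"{stage} -> {next_stage}: {m / n:.1%}")

# Hires per application: the half-a-percent rate quoted in the talk.
overall = funnel[-1][1] / funnel[2][1]
print(f"hires per application: {overall:.1%}")  # 0.5%
```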
So if you are hiring for Angular, you need to know if they know Angular, if they do NestJS, and so on. Based on that, we get a nice spreadsheet of candidates, and here are some metrics. I use a traffic-light system: green means good, red means bad.
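A spreadsheet screen like this can be sketched as a simple filter over candidate rows. The field names and thresholds below are hypothetical, not First Blood's actual criteria; the point is the traffic-light idea that any single red flag knocks a candidate out.

```python
# Hypothetical candidate rows, as they might land in the spreadsheet.
candidates = [
    {"name": "A", "knows_linux": True,  "salary_usd": 2000, "tz_overlap_hours": 5},
    {"name": "B", "knows_linux": False, "salary_usd": 1500, "tz_overlap_hours": 6},
    {"name": "C", "knows_linux": True,  "salary_usd": 9000, "tz_overlap_hours": 4},
]

def passes_screen(c, max_salary=4000, min_overlap=4):
    """Traffic-light style screen: every red flag knocks a candidate out."""
    return (c["knows_linux"]                     # dev environment is Linux/macOS
            and c["salary_usd"] <= max_salary    # asking within budget
            and c["tz_overlap_hours"] >= min_overlap)  # enough time-zone overlap

shortlist = [c["name"] for c in candidates if passes_screen(c)]
print(shortlist)  # ['A']
```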
Here are some criteria that I use to weed out the candidates that are not good. For example, if they don't know Linux, they are not going to be hired by us, because our dev environment is based on Linux and macOS. And after sorting out the candidates,
we come down to, let's say, 20 out of 500. They are invited to a GitHub exercise, and it's pretty much like a normal PR you would do for an open source project. There's a task: you need to add a form, add a migration, add a backend, and so on. They make a pull request, and they are judged by how good that pull request is.
Are the comments good? Are the tests passing? Can I read the code? And so on. As I said, we lost some people at the start of the year, so we are back to 50 people at the end of this year. We have devs working from 22 different countries at the moment, and good best practices
for working with such a number of people are: have mandatory office hours, so everybody knows when each person will be in Slack; use a service called Geekbot in Slack, where you report every day what you have done so management can follow; and have a dedicated human resources person, because it is a pain to manage so many people.
And you need a good onboarding process for how you get these people into your fully remote organization. And that is everything for this decade. Now we can go for the wine. If you like wine, please follow me. Thank you, and see you in the next decade.
Okay, thank you, Mikko. But we are not going for the wine yet, right, Chrissy? Yes, we are going to do closing remarks but I need to stop the stream and restart it so that it's easier for Mary Beth to split out later. So there will be a very short break. Thank you.