Infrastructure Review
Formal metadata
Number of parts: 275
License: CC Attribution 4.0 International — You may use, modify, and reproduce the work or its content in unchanged or modified form for any legal purpose, distribute it, and make it publicly available, provided you credit the author/rights holder in the manner they specify.
Identifiers: 10.5446/52073 (DOI)
Transcript: English (automatically generated)
00:03
Hello and welcome to the infrastructure review of the RC3 this year 2020. What the hell happened?
00:20
How could it happen? I'm not alone this year. With me is Lindworm, who will help me with the slides and everything else. I'm going to say right at the start: this is going to be a great fuck-up, like last year maybe. We have more teams, more people, more streams, more of everything. The first team, Lindworm, whom I'm going to introduce, is The Shock.
00:47
I've got to go to The Shock. It's kind of a stress this year. We only had about 18 heralds for the main talks, RC1 and RC2, and we have introduced about 51 talks with that.
01:03
Everybody from his home set up, which was a very, very hard struggle, so we all had a metric ton of adrenaline and excitement within us. Here you can see what you have seen, how a herald looks from the front, and so it does look in the background.
01:23
That was hard, really hard for us. You see all our different setups here that we have, and we are very, very pleased to also have set up a completely new operation center, the Herald News Show, which I really, really like you to review on YouTube.
01:45
This was such a struggle, and we have about, oh, wait a second. As we said, we're a little bit unprepared here. I need to have my notes up. There were 20 members that formed a new team on the first day. They made 23 shows, 10 hours of video recording,
02:06
20 times the pizza man rang at the door, and 23 Mate bottles had been drunk during the preps, because all of those people needed to be online the complete time.
02:21
I really applaud to them. That was really awesome what they brought over the team and what they brought over the stream. This is an awesome team. I hope we see more of. Joseph, would you take it over? Oh, no. My bad. So, is the heaven ready?
02:44
We need to go to the heaven and have an infrastructure review of the heaven.
03:01
Okay. Hello. I'm Radiolos from heaven. Yeah, heaven is ready. So, welcome everybody.
03:36
I'm Radiolos from heaven, and I will present you the infrastructure review from the heaven
03:53
This year we did not have as many angels as last year, because we had a remote event, but we still had a total of 1,487 angels, of which 710 arrived, and more than 300 angels
04:15
that at least still did one shift. And in total, the recorded work done to that point was
04:28
roughly 17.75 weeks of working hours. And for the rC3 world, we also
04:41
prepared a few goodies so people could come visit us. And so, we provided a few badges there — for every angel that, for example, found our expired fire extinguisher and also
05:05
extinguished a fire in heaven. The first badge was achieved by 232 of our angels, and a smaller but still good number of 125 angels managed to help us and extinguish
05:26
the fire that broke out during the remote event. And with those numbers checked, we will jump into our heaven. So, I would like to show some expressions and impressions from it. We had
05:44
quite the team working to do exactly what the heaven could do, manage its people. So, we needed our heaven office, and we also did this with respect to your privacy. So, we painted our clouds white as ever, so we cannot see your nicknames, and you could do
06:08
your angel work, but not be bothered with us asking for your names. And also, we had prepared some secret passage to our back office, and every time on the real event, it would happen that some
06:27
adventurers would find their way into our back office. And so, we needed to provide that opportunity as well, as you can see here. And let me say that some adventurers tried to find
06:42
the way in our sacred digital back office, but only few were successful. So, we hope everyone found its way back into the real world from our labyrinth. And we also did not spare any expenses to do some additional update for our angels as well. As you can see,
07:07
we tried to do some multi-instance support. So, some of our angels also accomplished to split up and serve more than one angel at the time, and that was quite awesome. And so, we tried to
07:22
provide the same things we would do on congress, but now from our remote offices. And one last thing that normally doesn't need to be said, but I think in this year and with this different
07:45
kind of event, I think it's necessary that The Heaven as a representative, mostly for people trying to help make this event awesome, I think it's time to say the things we do take for
08:05
granted. And that is thank you for all your help. Thank you for all the entities, all the teams, all the participants that achieved the goal to bring our real congress that many, many
08:23
entities missed this year into a new stage. We tried that online. It's up and down, but I still think it was an awesome adventure for everyone. And from The Heaven team, I can only say thank you. And I hope to see you all again in the future on a real event. Bye and have a nice
08:47
new year. Hello, hello back again. So, we now are switching over to the Signal Angels. Are the
09:14
Signal Angels ready? Hello. Yeah, hello. Welcome to the infrastructure review for the
09:26
Signal Angels. I have prepared some stuff for you. This was for us, slide please, this was for us the first time running a fully remote Q&A session set, I guess. We had some experience with
09:48
DiVOC and had gotten some help from there on how to do this. But just to compare, our usual procedure is to have a Signal Angel in the room. They collect the questions on their laptop there and they communicate with the Herald on stage. And they have a microphone — like, I'm wearing
10:06
a headset, but in there we have a studio microphone and we speak questions into it. Yeah, but remotely we really can't do that. Next slide. Because, well, it would be quite a lot of hassle for everyone to set up good audio setups. So, we needed a new remote procedure.
10:26
So, we figured out that the Signal Angel and the Herald could communicate via pad and we could also collect the question in there and the Herald will read the question to the speaker and collect feedback and stuff. So, we had a 100, 50, 75, no, 57 shifts and sadly we couldn't
10:53
fill five of them in the beginning because there was not enough people already
11:02
there. And yeah, also technically it was more than five unfilled shifts because for some reasons there were DJ sets and other things that aren't talks and also don't have Q&A. Yeah, we had 61 Angels coordinated by four supporters, so me and three other people.
11:21
And we had 60 additional Angels that in theory wanted to do Signal Angel work, but didn't show up to the introduction meeting. And yeah, next. As I've said, for each session, each talk, we created a pad where we put in the questions from IRC, Mastodon, Twitter, and
11:43
where we have a bit more pads than talks we actually handled. And I have some statistics about an estimated number of questions per talk. What we usually assume is that there's a question per line, but some questions are really long and have to split over multiple lines. There
12:02
are some structured questions with headings and paragraphs. Some Heralds or Signal Angels removed questions after they were done and also there was some chat and other communication in there. So next slide. We took a Python script, downloaded all the pad contents, read them, counted lines,
12:21
removed the size of the static header and in the end we had 179 pads and 1627 lines if we discount the static header of nine lines per pad. So that in theory leads to about
12:41
nine "questions" per talk — in quotation marks, because they are not really questions but lines — but it's an estimate. Thank you. And what I've learned is: never miss the introduction.
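The counting itself is only a few lines of Python. The sketch below merely illustrates the approach described above; the pad export URL, the pad names, and the nine-line header constant are stand-ins for illustration, not the team's actual script:

```python
# Rough sketch of the pad line counting described above.
# The pad URL pattern and the pad names are hypothetical examples.
import urllib.request

PAD_BASE = "https://pad.example.org/p/{}/export/txt"  # assumed export URL
STATIC_HEADER_LINES = 9  # each pad starts with the same nine-line template

def count_question_lines(pad_names):
    """Download each pad as plain text and count its non-header lines."""
    total_lines = 0
    for name in pad_names:
        with urllib.request.urlopen(PAD_BASE.format(name)) as resp:
            text = resp.read().decode("utf-8")
        lines = [line for line in text.splitlines() if line.strip()]
        total_lines += max(len(lines) - STATIC_HEADER_LINES, 0)
    return total_lines

if __name__ == "__main__":
    pads = ["talk-1-qa", "talk-2-qa"]  # hypothetical pad names
    total = count_question_lines(pads)
    # With 179 pads and 1627 counted lines, this works out to roughly
    # nine "questions" (really: lines) per talk: 1627 / 179 ≈ 9.1.
    print(total, "lines ≈ estimated questions")
```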
13:05
So the next in line are the line producers. STB, are you there? — I am here, in fact. So
13:31
So the people a bit older might recognize this melody badly sung by yours truly and other members of the line producers team and I'll get to why that is relevant to what we've been
13:47
doing at this particular event. So what do line producers do? What does an Aufnahmeleitung actually do? It's basically communication between everybody who is involved in the production,
14:01
the people behind the camera and also in front of the camera. And so our work started really early. Basically at the beginning of November taking on like prepping speakers in a technical setup and rehearsing with them a little bit and then enabling the studios to allow them
14:22
to actually do the production, coordinate on an organizational side. The technical side was handled by the VOC, and we'll get to hear about that in a minute. But getting all these people synced up and working together well, that was quite a challenge and that took a lot
14:41
of mumbles with a lot of people in them. We only worked on the two main channels. There's quite a few more channels that are run independently of kind of the central organization and again we'll get to hear about the details of that in a minute. And so we provided
15:02
information. We tried to fill wiki pages with relevant information for everybody involved. So that was our main task. So what does that mean specifically, the production setup? We had 25 studios mainly in Germany, also one in Switzerland. These did produce
15:27
recordings ahead of time for some speakers and many did live setups for their own channels and also for the two main channels. And I've listed everybody involved in the live production
15:41
here and there were 19 channels in total. So a lot of stuff happening, 25 studios, 19 channels that broadcast content produced by these studios. So that's kind of the Eurovision kind of thing where you have different studios producing content and trying to mix it all together.
16:02
Again, the VOC took care of the technical side of things very admirably, but getting everybody on the same page to actually do this was not easy. For the talk program we had over 350 talks in total, 53 in the main channels. And so handling all that, making sure everybody has the
16:24
speaker information they need, and all this organizational stuff — that was a lot of work. We didn't have a studio of our own for the main channels; the 25 studios, or rather the 12 live channels among them, actually provided the production facilities for the speakers.
16:45
So we can look at the next slide, there's a couple more numbers and of course a couple pictures from us working basically from today. We had 53 talks in the main channel, 18 of them were pre-recorded and played out. We had three where people were actually
17:06
on location in a studio and gave their talk from there. And we had 32 that were streamed live, like I am speaking to you now, with various technical bits that again the VOC will go into in a minute. And we did a lot of Q&As — I don't have the number of how many talks actually
17:24
had Q&As, but most of them did, and those were always live. We had a total of 63 speakers, we did prepare at least the live Q&A session for, and helped them set up, we helped them record their talks if they wanted to pre-record them. So we spent anywhere between one and two
17:44
hours with every speaker to make sure they would appear correctly and in good quality on the screen. And then during the four days we of course helped coordinate between the master control room and the 12 live studios to make sure that the speakers were where they
18:01
were supposed to be and any technical glitches could be worked out and decide on the spot. If for example the line producers made a mistake and a talk couldn't happen as we had planned because we forgot something, so we rescheduled and found a new spot for the speakers. So apologies again for that and thank you for your understanding and helping us bring you on
18:23
screen on day two and not day one, but I'm very glad that we could work that out. And that's pretty much it from the line producers. I think next up is the VOC.
18:40
Thank you, STB, yes, you're right. Next is the VOC, and Kunzi and JWAC Alex are waiting for us. This is Francis from the VOC. 2020 was the year of distributed conferences. We had two
19:08
DiVOCs and the FrOSCon to learn how we were going to produce remote talks. We learned a lot of stuff about organization, BigBlueButton and Jitsi recording. We had a lot of other events
19:21
which was just streaming like business as usual. So for RC3 we extended the streaming CDN with two new locations, now seven in total with a total bandwidth of about 80 gigabits per second.
19:40
We have two new mirrors for media.ccc.de and are now also distributing the front-end. We got two new transcoder machines and an enhanced setup; we now have 10 Erfas with their own productions on media.ccc.de. So the question is: will it scale?
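As a rough sanity check on what 80 gigabits per second buys — the per-viewer bitrate here is an assumed figure for illustration, not one from the talk:

```python
# Back-of-the-envelope capacity estimate for the streaming CDN above.
# 3 Mbit/s per viewer is an assumed average HD stream bitrate.
total_bandwidth_gbit = 80   # seven CDN locations combined
per_viewer_mbit = 3         # assumption, not from the talk

concurrent_viewers = total_bandwidth_gbit * 1000 / per_viewer_mbit
print(f"~{concurrent_viewers:,.0f} concurrent viewers")  # ~26,667
```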
20:06
On the next slide, we will see that it did scale. We did produce content for 25 studios
20:22
and 19 channels. So we got lots of recordings which will be published on media CCC in the next days and weeks. Some have already been published. So there's a lot of content for you to watch.
20:40
And now Alex will tell us something about the technical part. — My name is Alex. I will not tell you the technical part first, but more about the organization. I was between the VOC and the line producing team, and now a bit about how it worked. So we had those two main channels, rC1 and rC2. Those channels have been produced by the various studios
21:04
distributed around the whole country, and those streams — this is the upper path in the picture — went to our ingest relay, to the FEM, to the master control room. In Ilmenau there is a team of people adding the translations, making the mix, making the mixdown, making recordings,
21:20
and then publishing it back to the streaming relays. All the other studios produced their own channels. Those channels also took the signals from different studios, made a mixdown, etc., published to our CDN and relays, and we published the studio channels. As you can see, this is not the usual setup we had in the last years in person. So on the next slide we can see
21:44
where this leads. Lots of communication. We had the line producing team, we had some production in Ilmenau that had to be coordinated. We have the studios, we have the local studio helping angels, we have some Mumbles there, some work here, some CDN people, some web where something happens,
22:03
we have some documentation that should be and then we started to plot down the communication paths. Next slide please. If you plotted all of them it really looks like the world but this is actually the world but sometimes it feels like they're just getting lost in different paths. Who you have to ask, who you have to call, where are you, what's the shortest path to communicate.
22:27
But let's have a look at the studios. First going to Chaos West, Kunzi. Yes, on the next slide you will see the studio setup at Chaos West TV. So thank you Chaos West for producing your channel. At the next slide you see the
22:49
WikiPaka television — Fernsehen WikiPaka — who have the internal motto: absolutely not the fake chaos recording. But even then, some studios look more like studios. So this time, on the next slide: the hacc. Yeah, at the hacc you will also see
23:08
some of the bloopers we had to deal with. So for example here you can see there was a cat in the camera view. So yeah and Alex tell us about the open infrastructure orbit.
23:27
The open infrastructure orbit show in this picture you can see it's really artsy. How you can make a studio looking really nice even if you're alone there feeling a bit comfy a bit more hackish. But you have also those normal productions as in next slide the
23:44
Chaos Studio Hamburg: we had two regular VOC cases, like you know from all the other conferences, and they were producing on-site in a regular studio setup.
24:00
And last but not least, we got some impressions from ChaosZone TV. As you can see, here also quite a regular studio setup — quite regular. Now there was some corona ongoing, and so we had a lot of distancing, wearing masks, all the stuff so that everyone can be safe. But C3 Gelb will tell you something else about it. But look,
24:23
let's look at the nice things. For example the minor issue. On the second day we were sitting there looking at our nice Grafana. Oh we got a lot of more connections the server increasing. The first question was have we enabled our cache? We don't know but the number of
24:42
connections kept growing, so people are watching our streams, the ingest goes up, and we thought: well, at least people are watching the streams. If there's also an issue on the website — who cares, the ingest works. But then we suddenly got escalations. Well, something did not really scale
25:02
that well. And then you see on the next slide the issue. After looking at this traffic graph we switched pretty fast from "well, that's interesting" to "well, we should investigate". We got thousands of messages in Twitter DMs, we got thousands of messages in Rocket.Chat etc., and
25:20
suddenly we had a lot of connections to handle, a lot of inquiries to handle, and a lot of phone calls etc. to handle, and we had to prioritize: first the hardware, then the communication, because otherwise the escalations won't stop. On the next slide you can see what our minor issue was. So at first we get a lot of connections to the streaming web pages,
25:42
then to our load balancers and finally to our DNS servers. A lot of them were quite malformed — it looked like a storm — but the more important thing we had to deal with was all those passive-aggressive messages from different people who said: well, you can't even
26:00
handle streaming, what are you doing here. And we worked together with the c3infra team — thanks for that — on how to scale the DNS even more, just to provide people the connection power they need. So I think, in contrast to the last years, we don't need to use more bandwidth, but we sure can provide even more bandwidth if we need it. And now: tearing everything down.
26:28
so is it time to shut everything down no we won't shut everything down the studios can keep their endpoints can continue to stream on their endpoints
26:42
as they wish. We want to keep in touch with you and the studios, produce content with you, improve our software stack, improve other things like the ISDN — the Internet Streaming Digital Node, the project for small camera recording setups for sending to speakers — which needs
27:07
developers for the software also kevin needs developers and testers what's kevin oh we have prepared another slide of the next slide kevin is short for killer experimental
27:23
video internet noise. Because we initially wanted to use OBS.Ninja, but there are a couple of licensing issues; not everything of OBS.Ninja is open source like we wanted, so we
27:41
decided to code our own OBS.Ninja-style software. So if you are interested in doing so, please get in contact with us or visit the wiki. So that's all from the VOC, and we're now heading over to c3lingo. Exactly, c3lingo — Oscar should be waiting in studio two,
28:12
aren't you yeah hello um hi uh yeah i'm oscar from c3 lingo um we will jump straight into this
28:33
into the stats uh on our slides as you can see here we translated 138 talks this time
28:45
um as you can see it's also way less languages than in the other chaos events that we had since our second languages team that does everything that is not english and german was only five people strong this time so we only managed to do five talks into french and three talks into
29:03
Brazilian Portuguese. And then on the next slide we are looking at our coverage for the talks, and we can see that on the main talks we managed to cover all talks that were happening, from English to German and German to English depending on what the source language was. And then
29:26
on the other languages track we only managed to do 15 of the talks from the main channels and then on the further channels which is a couple of others that also were provided to us in the translation team we managed to do 68 of the talks but none of them were
29:45
translated into languages other than English and German. On the next slide, some global stats: we had 36 interpreters, who in total managed to translate 106 hours and seven
30:01
minutes of talks into another language simultaneously and the maximum number of hours one person did was 16 hours and the minimum number of hours the average number of hours people did was around three hours of translating across the entire event
30:20
All right. I also have some anecdotes to tell and some mentions I want to make. We had two new interpreters that we want to say hi to. And we had a couple of issues with the digital setup that we didn't have before with regular events where people were present — for example, the issue that sometimes, when two people are translating, one person starts translating
30:45
from the wrong stream they happened to be watching, and then the partner just thinks they have more delay or something. Or, for example, a partner having a smaller delay and then thinking that their partner can suddenly read minds, because they can translate faster than the other person is actually seeing the stream. Those are issues that
31:02
we usually didn't have with regular events — only with remote events. And yeah, some hurdles to overcome. Another thing was, for example, when on the r3s stage the audio sometimes cut out,
31:22
but because one of our translators had already translated the talk twice before, at least partially — because it had already been cancelled twice — they basically knew most of the content, could basically do a PowerPoint karaoke translation, and were able to do most of the talk just from the slides, without any audio.
31:46
um yeah and then there there also was um uh yeah the last thing i want to say is actually i want to say a big give a big shout out to the two of our team members that weren't able to interpret with us this time because they put their heart and soul into
32:02
making this event happen — and that's STB and Caddy. And that's basically everything from c3lingo. — Thanks. C3 subtitles is next; TD will show the right text to the slides you already saw
32:37
a minute ago okay okay hi so i'm td from the c3 subtitles team and next slide please uh so
32:52
just to quickly let you know how we get from the recorded talks to the release subtitles well we take the the recording videos and apply speech recognition software to get a raw
33:04
transcript and then angels work on that transcript to correct all the mistakes that the speech recognition software makes and we again apply some auto timing magic to to get some raw subtitles and then again angels do quality control on these tracks to get released subtitles next slide please so as you can see we have various sub-title tracks
33:25
in different stages of completion and these are seconds of material that we have you can see all the numbers are going up and to the right as they should be so next slide please um in total we had 68 distinct angels that worked four shifts on average
33:43
83 percent of our angels returned for a second shift, 10 percent of our angels worked 12 or more shifts, and in sum we had 382 hours of angel work for 47 hours of material. So far we've had two releases for rC3 — and hopefully more yet to come — and 37 releases for older congresses, mostly
34:05
on the first few days where we didn't have many recordings we have 41 hours still on the transcribing stage of material 26 hours of material in the timing stage and 51 hours material in the quality control stage so there's still lots of work to be done next slide please
34:23
When you have transcripts, you can do fun stuff with them — for example, you can count which words come up most and see that important to the people in this talk are people (a small sketch of such a count follows after this segment). We're working on other cool features that are yet to come, stay tuned for that. Next slide please. So to keep track of all these tasks we've been using a state-of-the-art, high-performance, lock-free, NoSQL, columnar data store,
34:45
aka a Kanboard in the previous years and because we don't have any windows in the CCL building anymore we had to virtualize that so we're using Kanban software now um at this point i would like to thank all our hard-working angels for their work and next slide please if you're feeling bored between congresses then you can work on some
35:05
transcripts just go to c3subtitles.de if you're interested in our work follow us on twitter and there's also a link to the release subtitles here so that's all thank you thank you TD
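The word-frequency fun mentioned above ("important to the people in this talk are people") is equally small in Python. This is a generic sketch over a local plain-text transcript, not the team's tooling; the file name is made up:

```python
# Toy word-frequency count over a transcript file.
# "talk-transcript.txt" is a hypothetical local file.
import re
from collections import Counter

def top_words(path, n=10):
    """Return the n most common words in a plain-text transcript."""
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-zA-Z']+", f.read().lower())
    return Counter(words).most_common(n)

if __name__ == "__main__":
    for word, count in top_words("talk-transcript.txt"):
        print(f"{count:5d}  {word}")
```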
35:22
And before we go to the POC, where Drake is waiting: I'm sure everyone is just asking why those guys are sitting there saying "um, next slide". So wait — in the end we will have the infrastructure review of the infrastructure review, so be patient. Now, Drake, are you ready in studio
35:45
one okay hello i'm Drake from the phone operation center and i'd like to present you
36:02
our numbers and maybe some anecdotes at the end of our part. So please switch to the next slide and let's get into the numbers first. First off, you registered about
36:23
5195 extensions which is about 500 more than you registered on the last congress also you did about 21 000 calls um a little bit less than on the last congress but yeah we are still quite proud of what you have used our system with and yeah it runs quite
36:46
stable. And as you may notice at the bottom, we also had about 23 DECT antennas at the congress — or at this little event. So please switch to the next slide. And this is our new feature — next slide — it is called the Eventphone
37:08
Decentralized DECT Infrastructure, which we especially prepared for this event: the EPDDI. So we had about 23 RFPs online throughout Germany, with 68
37:23
DECT telephones registered to them. But it's not only the German part that we covered — we actually had one mobile station walking around through Austria, through Passau I think. So indeed we had a European Eventphone decentralized DECT infrastructure. Next slide,
37:47
please um we also have some anecdotes so maybe some of you have noticed that we had a public phone a working public phone in the rc world where you could call other people on the
38:01
telephone system. And also, other people started to play with our system, and I think yesterday someone started to introduce the C3 fire, so you could actually control a flamethrower through our telephone system. And I'd like to present here
38:23
a video um next slide please maybe you can play it i have quite a delay and waiting for the video to play so what you can see here is um the c3 fire system um
38:41
actually controlled by a DECT telephone somewhere in Germany. So, next slide please. We also provided you with an SSTV service under the phone number 229,
39:01
um did you um yeah where you could receive some pictures from event phone um like a postcard basically so basically you could call the number and receive a picture or some other pictures or more pictures and next slide please um yeah basically that's all from uh the event phone
39:27
and with that we say thank you all for the nice and awesome event, and bye from the first certified assembly, the POC. Bye. — Thank you, POC. And hello GSM: Lynxis is waiting for us.
39:51
Yeah, hello. I'm Lynxis, I'm from the GSM team, and this year was quite different, as you can imagine. However — next slide please — we managed to get a small
40:07
network running um and also a couple of sim cards registering um so where are we now so next slide please as you can see we are just there in the red dot there's not even a single
40:24
line for our five extensions um but even we managed 130 calls over five extensions and next slide please um so we got um so we got five extensions um registered with four sim cards
40:44
and three locations with mixed technologies also two users so far sadly and one network with more or less zero problems um so let's take a look on the coverage so next slide please
41:02
so we we are quite lucky that we managed to get an international network running so we got two um two base stations uh in berlin one in the hackerspace in afra and another one north of berlin and uh yeah one uh one of our members is uh currently in mexico and uh yeah he's
41:23
providing uh the the remote chaos uh network figure um yeah so um that's basically our network um so before we going to the next slide um we have um what what we have done so far is um
41:45
yeah it's we are just two people instead of uh 10 to 20 and had some fun with improving our network and preparing for yeah the next congress and um next slide please and uh yeah now i'm closing with the edge computing uh we improved our edge capabilities
42:07
and um yeah i wish you a uh hopefully better year and uh yeah maybe see you next next even mode or in person have fun thank you and uh i give a hand to the lindvorm for
42:27
doing the slide DJ all the time, and he now has to switch to the Haecksen, who are next; they bring an image and are waiting for us in studio three.
42:47
Hello! What are phones without people? So I'll give you now an introduction to how many people we needed to run the whole Haecksen assembly. We had around 20 organizing Haecksen, and we had around 20 speakers in our events, and we had in total around 40 events, but I'm
43:04
pretty sure that I don't even know all of these. As you have realized, our world is pretty large, so we needed around 7 million pixels to display the whole Haecksen world, and that needed around 400 commits in our GitHub corner of the internet.
43:23
Around 130 people received the fireplace badge in our case, and around 100 people tested our swimming pool and received that badge — so a great year for not really going swimming. Also, around 449 people showed some very deep dedication and checked out all memorials in our
43:43
Haecksen assembly — congratulations for that, there were quite many of these. Our events ran around our own BigBlueButton, externally from the congress, and so, starting from day zero, we had no lags and were able to host up to 133 people in one session, and that was quite stable. We also introduced our new members: around 30 new Haecksen joined just for
44:06
the congress, and we have now increased to a size of 440 Haecksen overall. Also, we got new Twitter accounts following us — we have added over 200 more Twitter followers — and so
44:21
our messages are getting heard. But besides the virtual world, we also did some quite physical things. First of all, we distributed over 50 physical goody bags to people, with microcontrollers and self-sewn masks in them, as you can see in the picture. And also, sadly, we sold so many rC3 Haecksen-themed trunks that they are now out of stock, but they will be back
44:44
in January. Thank you. — No, thank you! And I'm going to send thanks to the Chaospatinnen, who are waiting in studio one. — Well, this is Mike from the Chaospatinnen team.
45:12
we've been welcoming we've been welcoming new attendees and underrepresented minorities to the chaos community for over eight years we match up our mentees with experienced chaos mentors
45:22
These mentors help their mentees navigate our world of chaos events. DiVOC was our first remote event, and it was a good proof of concept for rC3. This year we had 65 amazing mentees and mentors, two in-world mentee-mentor match-up sessions, one great assembly event hosted by two of our new mentees, and a wonderful world map assembly
45:47
built with more than 1337 kilograms of multi-color pixels next slide please and here's a small part of our assembly with our signature propeller hat tables and thank you to the amazing chaos
46:04
Patinnen team — Fragilant, Yali, Asriel, and Lilafish — and to our great mentees and mentors. We're looking forward to meeting all of the new mentees at the next chaos event. — Yeah, I think that was my call, so next up we'll have, let me see,
46:39
the c3 adventure. Are you ready? — Hello, my name is Rowing, and we will talk about the
46:56
c3 adventure the 2d world and what we did to bring it all online next slide please okay
47:06
so when we started out we looked into how we could bring a congress-like adventure to the remote experience and on october we started with the development and we had some trouble
47:28
in that we had multiple upstream merges that gave us some problems and also due to just congress being congress or remote experience being remote experience we needed to introduce
47:42
features a bit late, or add features on the first day. So auth was merged just at 4:30 in the morning on the first day, and on the second day we finally fixed instance jumps — you know, when you walk from one map to the next. We had some problems there, but hopefully you
48:08
enjoyed the badges that have finally been updated and brought into the world today what does that all mean since we started implementing there have been 400 git commits
48:23
in our repository all in all including the upstream mergers but i think the more interesting stuff is what has been done since the whole thing went live we had 200 additional commits fixing stuff and making the experience better for you
48:43
Next slide. In order to bring this all online, we not only had to think about the product itself, not only about the world itself, but we also had to think about the deployment. The first commit on the deployer — it's a backend service that brings the experience to you —
49:06
was done on the 26th of November. We started the first instance, the first clone of the WorkAdventure, through this deployer on the 8th of December. And a couple of days beforehand I was getting a bit swamped — I couldn't do all of the work anymore, because I had to handle
49:26
both of the projects and so my colleague took over for me and helped me out a lot so i'll give over to him to explain what he did yeah so imagine that that on day minus five i get a message from a friend that hey help is needed so i said okay let's do it and
49:48
Rowing tells me that, okay, we can spawn an instance and we need to scale it somehow. And I spawned the deployer, and my music stops — I streamed music from the internet — and
50:05
I wonder why it stopped, and I noticed that, oh, there are a lot of logs now — like, a lot. And I finally, on day minus four, noticed that the deployer was spawning
50:20
copies of itself each few seconds in a loop so that was the state back then since day minus four until day one we have basically written the thing and uh well day one we were ready well almost ready i mean uh we have like four instances deployed
50:46
and i forgot to mention that when we were about to deploy 200 ones at once it wouldn't work because all of all of the things would time out
51:00
So we patched things quickly, and at 13 o'clock we had our first deployment. This worked and everything was fine — and why is everyone on one instance? So it turns out that we had a bug, not in the deployer but in the app, that would move you
51:25
from the lobby to the lobby on a different map so during the first day we have we've had a lot of issues of people not seeing each other because they were all on different instances of the lobby so we were working hard and next slide please so we can see that
51:47
um we're working hard to reconfigure that to bring you together in the assemblies i think we have succeeded you can see the population graph on this slide
52:03
the first day was our almost most popular one and the next day it would seem that okay it's not as popular but we have hit the peak of 1 600 users that day
52:23
What else about this? The most popular instance was the lobby, of course. The second most popular instance was the Hardware Hacking Area for a while, then the third, I think. Next slide please. We have counted — well, first of all, we had in total about 205 assemblies.
52:53
The number was increasing day by day, because throughout the whole congress people were still working on their maps. For a while, one assembly had over a thousand maps active,
53:07
which led to the map server crashing some of you might have noticed that it stopped working quite a few times during day three and they have reduced the number of maps to
53:21
255 and that was fine at the end of day three i have counted about 628 maps this is less than is than was available in reality because i you know it was the middle of the night as always and
53:48
it wasn't trivial to count them. But in the maps I have found, we found over two million used tiles. So that's something you can really explore — I wish I could have, but
54:04
deploying this was also fun next slide please and what yeah just a quick interject i really want to thank everyone that has put work into their maps and made this whole experience
54:21
work we we provided the infrastructure but you provided the fun and so i really want to thank everyone yeah the more things happen on the infrastructure the more fun we have we especially don't like to sleep so we didn't i basically exchanged with
54:42
Rowing, in such a way that I slept five hours during the night and he slept five hours during the day, and the rest of the time we were up. The record, though: Rowing is now 30 hours up straight, because the badges were too important to bring to you to go to sleep.
55:08
the thing you see on this graph is undeployed instances we were redeploying things constantly usually in the form of redeploying half of the infrastructure at any given time
55:21
The way it was developed, you wouldn't have noticed that — you wouldn't be kicked off your instances, but for a brief period of time you wouldn't be able to enter, you wouldn't be able to see anyone. Next slide. I have been telling people for a few days of the congress that I have been
55:43
implementing sort of a Kubernetes thing, because it automatically deployed things and managed things and so on. And I noticed by day three that I had achieved true enlightenment through automation, because we decided to redeploy everything at once
56:05
at some point. The reason was that we were being DDoSed and we had to change something to mitigate that. And so we did that, and everything was fine — but we made a typo.
56:22
we made a typo and the deployment failed and once the deployment failed it deleted all the servers so yeah 405 servers got deleted by what i'm remembering was a single line so it was
56:45
brought up automatically, and that wasn't a problem, it was all fine. But well — to err is human; to automate mistakes is DevOps. Next slide. What's important is that these 405
57:02
servers were provided by Hetzner; we couldn't have done that without their infrastructure, without their cloud. The reason we got back up so quickly after this was that the servers were deleted, but they could be reprovisioned almost instantly, so the whole thing took like
57:23
10 minutes to get back up. Next slide — that's all. Thank you all for testing our infrastructure, and see you next year.
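The deployer itself is not shown in the talk, so the following is only a generic sketch of the pattern described in this segment — a declared desired state that an automatic loop reconciles against reality — plus the kind of guard that would have refused the typo-induced mass deletion. Every name and function here is hypothetical, not the rc3 deployer:

```python
# Generic reconciliation-loop sketch (not the actual rc3 deployer).
# provision() and destroy() stand in for whatever cloud API is used.

def provision(name):
    print("provisioning", name)

def destroy(name):
    print("destroying", name)

def reconcile(desired, running, max_delete_fraction=0.1):
    """Bring the running set of servers in line with the desired set.

    A typo that empties `desired` would otherwise schedule every running
    server for deletion; the max_delete_fraction check refuses such a
    mass deletion and forces a human to confirm first.
    """
    to_create = desired - running
    to_delete = running - desired

    if running and len(to_delete) / len(running) > max_delete_fraction:
        raise RuntimeError(
            f"refusing to delete {len(to_delete)} of {len(running)} servers; "
            "desired state looks wrong"
        )

    for name in sorted(to_create):
        provision(name)
    for name in sorted(to_delete):
        destroy(name)

if __name__ == "__main__":
    running = {f"world-{i}" for i in range(405)}
    desired = set()              # the fatal typo: an empty desired state
    reconcile(desired, running)  # raises instead of deleting 405 servers
```

Reprovisioning from the same declaration is also what let the 405 deleted servers come back within roughly ten minutes.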
57:44
Thank you, C3 Adventure. So this was clearly the first conference where we didn't clap for falling Mate bottles — if that's now a thing, maybe try it next year. Um, the lounge — and I know I have to ask for the next slide too.
58:01
There are the c3 lounge artists, and I was asked to read every place someone is in, because everyone helped to make the lounge what it was: an awesome experience. So there were Berlin, Mexico City, Anduras, London, Zurich, Stockholm, Amsterdam, Rostock, Glasgow,
58:26
Santiago de Chile, Park Hembush, Mayoka, Krakow, Tokyo, Philadelphia, Frankfurt am Main, Köln, Moscow, Taipei (Taiwan), Hanover, Shanghai, Seoul — Seoul I think, sorry — Vietnam, Hong Kong,
58:49
Kalto, and Guatemala. Thank you, guys, for making the lounge. So the next is the hub, and Heyshaw should be waiting in studio two.
59:27
software is based on Django and it's intended to be used for the next event. The problem is, it
59:41
was a new software — we had to do a lot of integrations live, during setup, during day zero. Oh — okay.
01:00:05
OK, yeah, hi. I'm presenting the hub, which is a software we wrote for this conference. Yeah, it's based on different components. All of them are based on Django.
01:00:22
It's intended to be used on future events as well. Our main problem was it's a new software. We wrote it, and yeah, a lot of integrations were only possible on day zero or day one. And yeah, so even still today on day four,
01:00:41
we did a lot of updates, commits to the repository. And even those numbers on the screens are already outdated again. But yeah, as you can possibly see, we have a lot of commits all day and night long, only a small dip at 6 AM.
01:01:02
Sorry for that. Next slide, please. And yeah, for the numbers, you are quite busy using the platform. Some of these numbers on the screen are already outdated again. Out of the 360 assemblies which registered,
01:01:22
only 300 got accepted. Most of them were, yeah, event, or people wanting to do a workshop and trying to register an assembly, or duplicates. So please organize yourself. Events, currently we have over 940 in the system.
01:01:41
You're still creating events. Nice, thanks for that. The events are coordinated with the studios, so we are integrating all of the events of all the studios and the individual ones and the self-organized sessions — all of them. A new feature: the badges.
01:02:02
Currently, you have created 411. And yeah, from these badges redeemed, we have 9,269 achievements and 19,000 stickers. Documentation, sadly, was 404 because we
01:02:22
were really busy doing stuff. Some documentation has already been written, but yeah, more documentation will come available later. We will open source the whole thing, of course. But right now, we're still in production
01:02:40
and cleaning up things. And yeah, finally, for some numbers, total requests per seconds were about 400. In the night when the world was redeploying, then we only had about 50 requests per second. But it maxed up to 700 requests per second.
01:03:02
And the authentication for the world for the 2D adventure, it was about 220 requests per second. More or less stable due to some bugs and due to some heavy usage. So yeah, we appreciate that you use the platform,
01:03:22
used the new hub, and hope to see you at the next event. Thanks. — Hello, hub. Thank you, hub.
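The hub was not yet open-sourced at the time of this talk, so the Django models below are purely illustrative — a guess at how the assemblies, events, and redeemable badges described above might be modelled. None of these class or field names come from the real hub:

```python
# Hypothetical Django models (inside an app's models.py) sketching the
# kind of data the hub tracks: assemblies, events, and redeemable badges.
from django.db import models


class Assembly(models.Model):
    name = models.CharField(max_length=200)
    accepted = models.BooleanField(default=False)  # 300 of 360 were accepted


class Event(models.Model):
    assembly = models.ForeignKey(Assembly, on_delete=models.CASCADE)
    title = models.CharField(max_length=200)
    starts_at = models.DateTimeField()


class Badge(models.Model):
    code = models.CharField(max_length=64, unique=True)  # redeem code
    title = models.CharField(max_length=200)


class Redemption(models.Model):
    badge = models.ForeignKey(Badge, on_delete=models.CASCADE)
    user_handle = models.CharField(max_length=100)
    redeemed_at = models.DateTimeField(auto_now_add=True)
```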
01:03:41
And next, Bittelas is waiting for us. He's from the c3auti team, and he will tell us what he and his team did this year. — I'm Bittelas from c3auti, and we've been really busy this year.
01:04:03
As you can probably see by the numbers on my next slide, we have 37 confirmed Auti angels. And today, we surpassed the 200-hour mark.
01:04:22
We've had 10 organ numbers leading up to the event. And there are almost 5 million unique pixels in our repository. I'm pretty convinced we've managed to create the smallest fairy dust of C3 provided by an actual space engineer.
01:04:42
And the tree of solitude is not the only thing we've managed to contribute to this wonderful experience. On our next slide, you can see that we also contributed six panel sessions
01:05:01
for autistic creatures to discuss their experiences and five play sessions for them to socialize. We helped to contribute a talk, a podcast, and an external panel to the big streams.
01:05:20
And on our own panels, we've had up to 80 participants that needed to be split up to five breakout rooms so they could all have a meaningful discussion. And all their ideas and thoughts were anonymized and stored on more than 1,000 lines
01:05:45
of markdown documentation that you can find on the internet. But 1,000 lines of markdown wouldn't be enough for me to express the gratitude I have towards all the amazing creatures
01:06:03
that helped us make this experience happen for all the amazing teams that work with us. I'm so happy to see you again soon, but now I think I will need some solitude for myself.
01:06:26
Thank you, Bittelas. So Lindworm, are you ready? The next one is a video, as far as I know.
01:06:41
It's from the C3 Inclusion Operations Center. I don't know their short name, C3IOC. And I'm just counting down, three, two, run, go.
01:07:20
So video is like a very difficult thing to play
01:07:23
these days, because we only used to do stuff live. Live means that a lot of pixels and traffic go from here, from this glass, through all the wires and cables, and back to the glass of your screen. And this is like magic to me somehow,
01:07:42
although I am only a robot talking synchronously with all the heads back there. Okay, I've already spent enough time, I think, so let's switch back to Lindy with the video.
01:08:03
Oh, I tell you what we're going to do. The event for everyone and especially people with... Hello everyone, I'm NWNG from the new C3 Inclusion Operations Center.
01:08:23
This year, we've been working on accessibility guides to help the organizing teams and assemblies improve the event for everyone and especially people with disabilities. We have also worked with other teams individually to figure out what can still be improved in their specific range of functions, but there's still a lot to catch up on.
01:08:42
Additionally, we have published a completely free and accessible CSS design template that features dark mode and an accessible font selection. And it still looks good without JavaScript. 100 internet points for that. For you visitors, we've been collecting your feedback through mail or Twitter and won't stop after the Congress.
01:09:02
If you've stumbled across some barriers, please get in touch via c3ioc.de or @C3Inclusion on Twitter to tell us about your findings. Thanks a lot for having us. Thank you for the video. Finally, the tech is working.
01:09:21
We should... Does someone know computers? Maybe? Kritis is one of them, and he is waiting in Studio One to tell us something about C3 Yellow, or C3 Gelb, if you hear that.
01:09:45
Yeah, welcome. I'm still looking at this hard drive. Maybe you remember this from the very beginning. It has to be disinfected really thoroughly. And I guess I can take it out by the end of the event. And for the next slide, please: we found roughly 777 hand wash options
01:10:06
and three FF waste disposal possibilities. We checked the correct date on almost all of the 175 disinfectant options you had around here.
01:10:21
And because at a certain point of time, people from CERT were not reachable in the CERT room because they were running around everywhere else in this great 2D world, we had the chance to bypass and channel all the information because there were two digital cats on a digital tree. And so we got the right help to the right option. Next slide, please.
01:10:42
We have a couple of things ongoing. A lot of work had been done before; we had all the studios with all the corona measures going on before. But now we think we should really look into an angel disinfectant swimming basin for next time, to have the maximum level of cleanliness there.
01:11:02
And we will talk with the BOC about whether we can maybe manage to use these Globuli maxi cubes for the Tschunk in the upcoming time. Apart from that, in order to get more Bachblüten and everything else, we need someone who is able to help us
01:11:21
with the potential of homeopathic substances. So if that appeals to you, please just drop us a line at info.3.3-gelp.de. Thank you very much and good luck.
01:11:41
Thank you, Kritis. Finally happy to hear your voice; I only know you from Twitter, where we tweet our stuff together. Our ideas and your mind don't. Maybe you're going to change it, please. Talking about messages, Chaos Post was here too. And their team leader, whom we already heard earlier,
01:12:03
has more to say. Okay, welcome, it's me again. I've changed outfits a bit. I'm not here for the Signal Angels anymore, but for Chaos Post. So yeah, we had an online office this year again, as we had with the DiVOC before. And I've got some mail numbers for you
01:12:23
that should be on the screen right now. If it's not, if it's still the title page, please switch to the first one, where it lists a lot of numbers. We had 576 messages delivered in total. These are the numbers from around half past five,
01:12:44
and 12 of them we weren't able to deliver because well, non-existent mailboxes or full mailboxes mostly. We delivered mails to 34 TLDs, the most going to Germany to .de domains followed by .com, .org, .net
01:13:02
and to Austria with .at. We had a couple of motifs you could choose from. The most popular one was the fairy dust at sunset; 95 people selected that. Next slide. About our service quality: we had a minimum delay from the message coming in,
01:13:21
us checking it, and it going out, of a bit more than four seconds. The maximum delay was about seven hours. That was overnight, when no agents were around, or they were all asleep or busy with, I don't know, the lounge or something. And on average, a message took you,
01:13:41
took us 33 minutes from you putting it into our mailbox to it getting out. Some fun facts: we had issues delivering to T-Online during the first two days, but we managed to get that fixed. A different mail provider refused our mail because it contained the string rc3.world,
01:14:01
the domain, in the mail text. And apparently new domains are scary and you can't trust them or something. So we created a ticket with them. They fixed it and it was super fast, super nice service. Yeah, also some people tried to send digital postcards to Mastodon accounts
01:14:20
because they look like email addresses or something. Another thing that's not on a slide: we had another new feature this time, our named recipients. So you could, for example, send mail to CERT without knowing their address. And they also have a really nice postcard wall
01:14:41
where you can see all the postcards you sent them. The link for that is on Twitter. Thank you. Thank you, Chaos Post. Lindworm, are you there? Yeah, yeah, I'm there, I'm there. Hello.
01:15:00
So we are almost done. I hear you. So I have to switch some more slides again. It's kind of stressful for me, really. You're doing an awesome job. Thank you for doing it.
01:15:21
So just out of curiosity, did you have problems accepting any cookies or so? No, not really. I heard somewhere that some really smart people had problems using the site because of cookies.
01:15:40
Oh no, that was not my problem. I only couldn't use the site because of overcrowding. That was often one of my little problems. And please, I hope you don't see what I'm doing right now in the background with starting our paths and so on.
01:16:04
As far as I know, I mean... What I wanted to say to all of you: this was the first Congress where we had so many women and so many non-cis people running this show, being in front of the camera, and making everything happen.
01:16:22
I really want to thank you all. Thank you for making that possible. And thank you that we are getting more and more diverse year by year. I can only second that. And now we are switching to the C3 infrastructure.
01:16:44
Yeah, we need to... I'm sure a lot of questions will be answered by them. And I tried to bring up the slides for that, but I can't find them right now. Yep, now I'm on TV.
01:17:03
Yeah, welcome to the infrastructure review of the team infrastructure. I'm not quite sure if we have the newest revision of the slides, because my version of the stream isn't loading right now. So maybe, Lindworm, is it possible to press Ctrl-R?
01:17:24
And if you're seeing a burning computer, then we have the actual slides. It's just like karaoke, but without the background music. Yeah, everything up to the PowerPoint presentation is in real time.
01:17:43
Now I'm seeing me. Let's wait a few seconds until we see a slide. We want to wait out the entire stream delay; it's just about 30 seconds to one minute. Well done.
01:18:03
Yeah, I'm Ties and I'm waiting. And this is Patrick and he's waiting too. Yeah, but that's in the middle of the slides. Can we go? Okay. Yep. I'm now seeing something in the middle of the slides,
01:18:23
but yeah, it seems fine. Okay, yeah. We are the team C3 Infra, our C3 infrastructure team. We're creating the infrastructure. Next slide. And we had about nine terabytes of RAM
01:18:42
and 1,700 CPU cores for the whole event. Only one dead SSD, which died because everything's broken. We had five dead RAID controllers and didn't bother to replace the RAID controllers,
01:19:02
just replaced them with new servers. And 100% uptime. Next slide. We looked at boot screens of enterprise servers for about 42 hours in total; 20 minutes max is what HP delivered.
01:19:22
And we are now certified enterprise server observers. We had only 27% of visitors using IPv6, so that's even less than what Google publishes. And even though we had almost full IPv6 coverage,
01:19:40
except for some really, really shady out-of-band management networks, we're still not at the IPv6 coverage that we are hoping for. I'm not quite sure if those are the right slides, and I'm not quite sure where we are in the text.
01:20:03
So yeah, Patrick. Yeah, so before the Congress, there was one prediction: there's no way it's DNS, it can't be DNS. And well, it was DNS at least once. So we checked that box, and let's go over to the next topic, OS.
01:20:23
We provisioned about 300 nodes, and it was an Ansible-powered madness. So yeah, there was full disk encryption on all nodes. No IPs logged in the access logs; we took extra care of that, and we configured minimal logging wherever possible. So in the case of some problems,
01:20:42
we only had warnings available, and yeah, no info logs, no debug logs, just the minimal logging configuration. And with some software, we had to pipe the logs to /dev/null because the software just wouldn't stop logging IPs, and we didn't want that.
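As a rough illustration of that kind of IP scrubbing, here is a minimal sketch assuming a Python service that uses the standard logging module; the regular expression, filter class, and redaction token are illustrative assumptions, not the actual configuration used.

```python
import logging
import re

# Roughly matches IPv4 and IPv6 addresses; illustrative only.
IP_RE = re.compile(
    r"\b(?:\d{1,3}\.){3}\d{1,3}\b"                        # IPv4
    r"|\b(?:[0-9a-fA-F]{1,4}:){2,7}[0-9a-fA-F]{0,4}\b"    # rough IPv6
)

class ScrubIPs(logging.Filter):
    """Redact anything that looks like an IP address before a record is emitted."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = IP_RE.sub("[redacted]", record.getMessage())
        record.args = None  # message is already fully formatted
        return True

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.addFilter(ScrubIPs())
logger.addHandler(handler)
logger.setLevel(logging.WARNING)  # warnings only, no info or debug, as described

logger.warning("request from 203.0.113.42 timed out")
# prints: request from [redacted] timed out
```

Attaching the filter to the handler means addresses are scrubbed before a line ever reaches disk, which is the same effect as piping an overly chatty logger to /dev/null, just less drastic.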
01:21:00
So: no personal data in logs, no GDPR headache, and your data is safe with us. The Ansible madness I talked about was a magical deployment that debootstrapped into the live system and assimilated it into the rC3 infrastructure while it was still running. So if you didn't reboot the machine,
01:21:21
then it was just running. When an OS deployment was broken, it was almost always due to network or routing; at least, the OS team claims that, and this claim is disputed by the network team, of course. At one time, the deployment broke because of a trigger-happy infra angel,
01:21:43
but let's not talk about that. Of course, at this point, we want to announce our great cooperation with our gold sponsor, DDoS24.net, who provided an excellent service of handcrafted requests to our infrastructure.
01:22:04
There was great public demand, with some million requests per second for a while. But even during the highest, or peak, demand, we were able to serve most of these services.
01:22:21
We provided some infrastructure, it went live, and they quickly made use of the provided infrastructure and deployed there; overall, an amazing time to market. We had six locations, and those six locations were some wildly different special snowflakes.
01:22:42
So we had DUS: 816 CPU cores there, two terabytes of RAM, and a 10 gigabit per second interconnect. There was also a one terabit per second InfiniBand available, but sadly we couldn't use that. It would have been nice.
01:23:01
The machines there had a weird IPMI, which made it hard to deploy there, and the admin on location had never deployed bare-metal hardware to a data center before. So there was also some learning experience there. Fun fact about DUS: this was the data center with the maximum heat. One server, seven units,
01:23:21
over 9,000 watts of power, 11.6 to be exact, so they had to come up with some creative heat management solutions. Next was Frankfurt. There we had 620 gigabit of total uplink capacity
01:23:44
and we actually only used 22 gigabit during peak demand, again thanks to our premium sponsor DDoS24.net. There was zero network congestion, and 1.5 gigabit per second of that was IPv6.
01:24:00
So there was no real traffic challenge for the network engineers among you. It was a full layer-three architecture with MPLS between the LAN routers, and there was a night shift on the 26th and 27th to rack more servers, because some shipments hadn't arrived yet.
01:24:24
The fun fact about this data center was the maximum bandwidth: some servers there had a 50 gigabit uplink configured. It was also the data center with the maximum manual intervention. Of course, we had the most infrastructure there, and it wasn't oversubscribed at any point.
01:24:45
We had some hardware in Stuttgart, which was basically the easiest deployment. There were also some night shifts, but thanks to Neuner and team, this was a really easy deployment. It was also the most silent DC: no incidents from day minus five until now.
01:25:04
So if you're currently watching from Stuttgart, you can create some issues, because now we've said it. Wolfsburg was the smallest DC. We only had three servers there, and we managed to kill one hardware RAID controller, so we could only use two servers there.
01:25:23
And then Hamburg was the data center with the minimum uptime. We never could deploy to this data center because the network there was broken, and we couldn't provision anything. And of course, the last data center was the Hetzner cloud,
01:25:41
where we deployed across all of their locations. The common fun fact: we received a COVID warning from the data center. Luckily, it didn't affect us; it was at another location, but thanks for the heads-up and the warning. The team lead of a sponsor needed to install Proxmox in a DC with no knowledge,
01:26:03
or without any clue what they were doing. We installed Proxmox in the Hamburg DC and no server actually wanted to talk to us so we had to give up on that. And there had to be a lorry relocated before we could deploy other servers.
01:26:22
So that was standing in the way there. Now let's get to Jitsi. Our peak user count was 1,105 users at the same time on the same cluster. I don't know if it was at the same time
01:26:40
as the peak user count but the peak conference count was 204 conferences. I hope you can still read that today but that is data from yesterday. And the peak conference size was 94 participants in a single conference. And let me give condolences to your computer
01:27:02
because that must have been hard on it. Our peak outgoing video traffic on the Jitsi video bridges was 1.3 gigabit per second and we had about three quarters of the participants
01:27:20
were streaming video and one quarter of them had video disabled. Interesting ratio. Our Jitsi deployment was completely automated with Ansible so it was zero to Jitsi in 15 minutes. We broke up the Jitsi cluster into four shards
01:27:41
to have better scalability and resilience. So if one shard went down, it would only affect a part of the conferences and not all of them, because there are some infrastructure components that you can't really scale or cluster. So we went with the sharding route.
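To make the sharding idea a bit more concrete, here is a minimal sketch of pinning each conference to one of four independent Jitsi shards. The shard hostnames and the hash-based assignment are assumptions for illustration, not the actual deployment logic.

```python
import hashlib

# Hypothetical shard frontends; each one is a full, independent Jitsi stack.
SHARDS = [
    "shard0.meet.example.net",
    "shard1.meet.example.net",
    "shard2.meet.example.net",
    "shard3.meet.example.net",
]

def shard_for(conference_name: str, shards: list[str] = SHARDS) -> str:
    """Deterministically map a conference name to a shard.

    Every participant of the same conference lands on the same shard,
    and losing one shard only takes down roughly 1/len(shards) of the
    running conferences instead of all of them.
    """
    digest = hashlib.sha256(conference_name.encode("utf-8")).digest()
    return shards[int.from_bytes(digest[:8], "big") % len(shards)]

if __name__ == "__main__":
    for name in ("assembly-meetup", "herald-news", "infra-review"):
        print(name, "->", shard_for(name))
```

In practice such a mapping usually lives in a load balancer or redirect service in front of the shards, but the effect is the same: the non-clusterable components only ever see their own share of the conferences.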
01:28:05
Our Jitsi video bridges were at about 42 percent peak usage, excluding our smallest video bridge, which had only eight cores and eight gigabytes, which we added in the beginning to test some stuff out, and it just remained in there. And yes, we over-provisioned a bit. There will also be a blog post on our Jitsi Meet deployment coming in the future
01:28:22
and for the next time, or rather for the upcoming days, we will enable 4K streaming on there. So why not use that? And we want to say thanks to the Jitsi Meet project,
01:28:43
and gave us some tips to handle load effectively and so on. We also tried making deck call out, or call, no, deck call out working, spent 48 hours trying to get it to work but there were some troubles there.
01:29:04
So sadly, no adding DECT participants to Jitsi conferences for now. jitsi.rc3.world will be running over New Year, so you can use that to get together with your friends
01:29:22
and so on over the New Year. Stay separate, don't visit each other, please; don't contribute to the spread of COVID-19. You've got the alternative there. Now let's go over to monitoring, Jitsi. Yeah, thanks. First of all, it's really funny how you edit this page
01:29:43
but reveal.js doesn't work that way until Lindworm reloads the page, which hopefully he doesn't do right now. Everything's fine, so you can leave it to me. Yeah, monitoring. We had Prometheus and Alertmanager set up, completely driven from our
01:30:03
one and only source of truth, our NetBox.
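A minimal sketch of what "driven from NetBox" can look like in practice: pulling active devices from the NetBox REST API and writing a Prometheus file_sd target file. The URL, token, and scrape port here are placeholders; this illustrates the pattern, not the team's actual tooling.

```python
import json
import requests

NETBOX_URL = "https://netbox.example.net"  # placeholder
NETBOX_TOKEN = "changeme"                  # placeholder
SCRAPE_PORT = 9100                         # assumed node-exporter port

def fetch_devices():
    """Yield active devices from the NetBox REST API, following pagination."""
    session = requests.Session()
    session.headers["Authorization"] = f"Token {NETBOX_TOKEN}"
    url = f"{NETBOX_URL}/api/dcim/devices/?status=active&limit=200"
    while url:
        page = session.get(url, timeout=10).json()
        yield from page["results"]
        url = page["next"]

def build_targets():
    """Turn devices into Prometheus file_sd entries, one target per primary IP."""
    targets = []
    for dev in fetch_devices():
        primary = dev.get("primary_ip")
        if not primary:
            continue  # skip devices without a primary address
        address = primary["address"].split("/")[0]  # strip the prefix length
        targets.append({
            "targets": [f"{address}:{SCRAPE_PORT}"],
            "labels": {
                "device": dev["name"],
                "site": (dev.get("site") or {}).get("slug", "unknown"),
            },
        })
    return targets

if __name__ == "__main__":
    with open("netbox_targets.json", "w") as fh:
        json.dump(build_targets(), fh, indent=2)
```

Prometheus then picks such a file up via a file_sd_configs entry, so adding a box to the source of truth is enough to get it scraped and alerted on.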
01:30:20
We received about 43,885 critical alerts; looking at my mobile phone, it's definitely more by now. And about 13,070 warnings, also definitely more by now. We attended to about 100 of them; the rest was kind of useless. Next slide, please. It's important to have an abuse hotline
01:30:42
and an abuse contact. We received two network abuse messages, both from Hetzner, one of our providers, letting us know that someone doesn't like our infrastructure as much as we do. Props to DDoS24.net. And we got one call at our abuse hotline, and it was a person
01:31:03
who wanted to buy a ticket from us. Sadly, we were out of tickets. Next slide, please. Some other stuff: we got a premium Ansible deployment, brought to you by Turing-complete YAML. That sounds scary.
01:31:21
And we had about 130k DNS updates; thanks to the world team at this point, they're really stressing our DNS API with their redeployments. And also, our DNS, Prometheus and Grafana are deployed on and by NixOS, thanks to Flubger,
01:31:43
so head over to Flubger's interwebs thingy; he wrote some blog posts about how to deploy stuff with NixOS. Next slide, please. And the last slide from the infra team is the list of our sponsors; huge thanks to all of them.
01:32:03
It wouldn't be possible to create such a huge event and such loads of infrastructure without them. And that's everything we have. Amazing. Thank you for all you've done, truly incredible,
01:32:25
and for sharing everything with the public. So, I promised that there would be a kind of behind-the-scenes look at this infrastructure talk, or review, and I really have nothing to do with it. Everything was done by completely different people. I'm only a herald somehow lost in this dream.
01:32:44
And so I'm just going to say, switch to wherever, show us the magic.
01:33:02
Three hours ago, I got a call. Hello and welcome from the last point of the infrastructure review, and greetings from Karlsruhe. So, three hours ago, I got a call from Lindworm, and he asked me how it is with this last talk we have.
01:33:20
It may be a bit complicated. And he told me, okay, we have a speaker, I'm the herald. Oh, as always. And then we realized, yeah, we don't have only one speaker, we have 24. And for that, we called Chaos West and built up an infrastructure, which Danf Katze will explain to you now
01:33:41
in a short minute, I think. Thank you. Yes. Oh, I lost the sticker. Okay. After we called Chaos West, we came up with this monstrosity of a video cluster. And we start here.
01:34:02
The teams streamed via OBS Ninja onto three Chaos West studios. They were brought together via RTMP on our Mix One local studio.
01:34:23
And then we pumped that into Mix Two, which pumped it further to the world. The slides were brought in via another OBS Ninja directly onto Mix Two; they came from Lindworm. Also the closing you will see shortly, hopefully,
01:34:43
will also come from there. And Yusuf and Lindworm were directly connected via OBS Ninja onto our Mix One computer. And Mix Two also has the studio camera you're watching right now.
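To make the RTMP plumbing between the mixes a bit more concrete, here is a minimal sketch of relaying one incoming RTMP feed onward to the next mix without re-encoding, using ffmpeg driven from Python. The URLs are placeholders, and the actual studio setup used OBS and OBS Ninja rather than this exact command.

```python
import subprocess

# Placeholder endpoints: an incoming team feed on Mix One and the program input of Mix Two.
INCOMING = "rtmp://mix1.example.net/live/team-feed"
OUTGOING = "rtmp://mix2.example.net/live/program"

def relay(incoming: str, outgoing: str) -> int:
    """Copy audio and video packets from one RTMP endpoint to another.

    '-c copy' avoids re-encoding, so the relay adds almost no latency or CPU load.
    """
    cmd = [
        "ffmpeg",
        "-hide_banner",
        "-i", incoming,
        "-c", "copy",   # passthrough, no transcoding
        "-f", "flv",    # RTMP expects an FLV container
        outgoing,
    ]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    raise SystemExit(relay(INCOMING, OUTGOING))
```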
01:35:02
And for the backend communication, we had a Mumble server connected to our audio matrix. And Lindworm, Yusuf, the teams, and we in the studio locally could all talk together. And now back to the closing with,
01:35:22
no, to the Herald News show, I think. Lindworm will introduce it to you. Lindworm is live.
01:35:43
Is Yusuf still there or do you come with me? So it will take a second of years. So thank you very much for this review. It was as chaotic as the cold Congress.