
33C3 Infrastructure Review


Formal Metadata

Title
33C3 Infrastructure Review
Subtitle
The usual extremely factual look behind the scenes of this event
Number of Parts
147
Author
et al.
License
CC Attribution 4.0 International:
You may use, modify, and reproduce, distribute, and make publicly available the work or its content in unmodified or modified form for any legal purpose, provided that you credit the author/rights holder in the manner specified by them.

Content Metadata

Abstract
NOC, POC, VOC and QOC show interesting facts and figures as an excuse to present all the mischief they’ve been up to this year.
Transcript: English (automatically generated)
So yeah, the infrastructure review of the 33C3. Please welcome them with a lot of applause.
Hello everyone. This is rixx. My name is Rami. We are the technical part of both the pre-sale and the cash desk team, and we want to show you some interesting things
about what we've done during this event and before this event. We will start with the pre-sale. As you know, all of you got your tickets before the event. We sold no tickets on site this year, so the pre-sale was a very important thing, after we knew that the event would sell out very fast
based on the experience from last year. We had, as you probably noticed, a two-stage pre-sale period that we split into a voucher stage and into an open sale. We implemented this voucher system
because we wanted to enable all those people who make Congress possible in the first place to be here. This is, of course, the angels because we need all these angels to build this, but this is also all the other parts of the community that we need here to build this experience. Because we do not personally know all of them,
we implemented this replicating voucher system in a way that once you got a voucher and you paid for your ticket, you got another voucher to share with a friend from a different group, from the same group, from anywhere around the world. So we used the Erfas of the CCC, the Chaostreffs and hackerspaces all over Europe
and outside Europe and other groups to spread those vouchers. On the software side, the pre-sale was running pretix, an open source ticket sales application based on Python and Django that we originally designed for the smaller MRMCD event.
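To illustrate the replication idea, here is a minimal sketch with invented names; the real implementation is a pretix plugin (on GitHub), so treat this purely as an illustration of the mechanism: whenever an order placed with a voucher is marked as paid, one fresh single-use voucher is issued and mailed to the buyer to pass on.

```python
# Minimal sketch of the replicating-voucher idea -- NOT the actual pretix plugin.
# Model and helper names here are hypothetical.
import secrets
from dataclasses import dataclass

@dataclass
class Voucher:
    code: str
    redeemed: bool = False

def issue_voucher() -> Voucher:
    """Create a fresh single-use voucher with a random code."""
    return Voucher(code=secrets.token_urlsafe(8))

def on_order_paid(order, send_mail) -> None:
    """Hook called when an order is marked as paid.

    If the order was placed with a voucher, replicate it: the buyer gets a new
    voucher to pass on, so every paid voucher ticket spawns exactly one child.
    """
    if getattr(order, "voucher", None) is not None:
        child = issue_voucher()
        send_mail(order.email,
                  "Your 33C3 voucher to share",
                  f"Pass this code on to a friend: {child.code}")
```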
And it is open source and on GitHub, and it has a very flexible plugin-based architecture that enabled us to implement this voucher replication system. The hardware we initially had it running on was one dedicated server
with an eight-core processor and 32 gigabytes of memory. After the voucher phase, which ended in late October, we went into the open sale in early November. And we split the open sale again into three rounds and issued the tickets on three different days, to enable people who are working shifts or working late
or could not access it at some times of the day or on some weekdays to have a fair chance. And as you know, in the first round all the tickets were bought by toasters, and one of the toasters came. Still, we're very sorry that you saw
a lot of error messages on that day. The load on the server was then over
400 HTTP requests per second over a period of a few hours, even though the tickets of the first round were all gone in about 13 minutes. To cope with that load, which increased in the coming rounds, we implemented a queue system
that we used to quickly handle that load without restructuring the whole system. We put a second dedicated server as a reverse proxy in front of the original one and used it to limit the number of people actually accessing the real ticket shop. If you went on the page on that day,
you were presented with a page that said "join the queue"; you pressed join the queue, you got a queue position like 360, meaning 360 people were in front of you, the position went down over time, and we let a few tens of people per second into the actual ticket shop to keep the load manageable.
We implemented this queue inside Nginx using embedded Lua configuration, and in the second round, you could play Snake on the waiting page, and if you want to embed that Snake game somewhere else, it's on GitHub as well.
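The production queue lived entirely inside Nginx as embedded Lua; as a rough illustration of the admission logic only (the rate and all names below are assumptions, not the actual configuration), the idea is that every visitor is handed an increasing queue position and only a fixed trickle of positions per second is let through to the shop.

```python
# Toy sketch of the waiting-queue admission logic. The real implementation was
# embedded Lua inside nginx; this only shows the idea: hand out increasing
# queue positions and admit a fixed number of visitors per second.
import itertools
import time

ADMIT_PER_SECOND = 20          # assumed rate ("a few tens of people per second")
_positions = itertools.count(1)
_start = time.monotonic()

def join_queue() -> int:
    """Give a new visitor their position in the queue."""
    return next(_positions)

def may_enter(position: int) -> bool:
    """True once enough time has passed for this position to be admitted."""
    admitted_so_far = int((time.monotonic() - _start) * ADMIT_PER_SECOND)
    return position <= admitted_so_far

# A visitor polls the waiting page until their position comes up,
# then the reverse proxy passes them through to the real ticket shop.
position = join_queue()
while not may_enter(position):
    time.sleep(1)
print(f"Position {position}: you may enter the shop now.")
```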
And also, we're very thankful that the VOC, the video team, offered to share some server resources with us, and they hosted the static files, CSS, JavaScript and images for us, and that is the number for one of the days, I guess. It's 20 gigabytes just of that.
In the second round, we peaked at more than a thousand requests per second, which we handled quite well. There were very low error rates on the queue page, and once you got through the end of the queue, you had a pretty much error-free experience in the actual shop.
In the third round, we peaked at 3,000 requests per second and unfortunately, we had a bug in the queue software that led to nobody actually getting a queue position. We had to restart the queue system at 10:15.
Nobody had bought a ticket at that time. We announced it like 10 minutes early, the original time was 10 a.m., and from then on, it went like the second sale, with very low error rates. We got some figures from the pre-sale for you that we want to share that might be entertaining. The first one was not that entertaining for us.
We got 1,763 support emails from you that we worked through with four volunteers on the team. I wanna express a special thanks to Martin who is not even here.
Those vouchers replicated and the longest voucher chain that you built was 15 vouchers long and we think that is quite a lot taking into account how long the voucher phase was. And the next is something that we find very interesting.
The average time it took for a ticket to be paid was 8.6 days, but if you only look at the tickets bought without a voucher, it's 9.2 days, and if you only look at those with a voucher, it's 6.8 days.
So that replicating system was very useful to speed up the payment processing. We are quite disappointed by the next statistic. 20% of you used Gmail to sign up for your ticket.
We expect that to get better next year. The second place... the third place is taken by Posteo.
We have another one. And with about 2%, we have CCC.de domains, subdomains of local hackerspaces and stuff like that. So I think we can improve here. So I'm handing over to rixx for the cash desk on site. Right, thanks.
Right, because just implementing a new pre-sale shop and fancy replicating system was not quite enough for us and we were a bit bored. We decided to rework the cash desk software itself
which seemed like a good idea at the time. We called the system C6SH. Obviously the old system was C4SH and version numbers go from four to six regularly. It's also based on Python and Django and we will open source it in January
once we remove the last ugly event hacks, hopefully. On GitHub, and I'm sure the nice people at the relevant Twitter accounts will publish that fact in time. The software we implemented is the software
that actually runs on the cash desks that you see when you enter the Congress Center, and it also handles back-office stuff like figuring out how many wristbands were given out and whether the money in the cash desk is actually kind of correct, or in the right ballpark.
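C6SH itself only shows up on GitHub in January, so as a hedged sketch (all names below are invented, not the actual C6SH models), the on-site part of the job boils down to: scan the code on the pre-sale ticket, make sure it has not been redeemed before, record the transaction, and count it toward the wristband and money totals that the back office later checks.

```python
# Rough sketch of what a cash-desk transaction boils down to -- NOT actual
# C6SH code (the real system is a Django application). Names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Ticket:
    code: str              # content of the QR code on the pre-sale ticket
    redeemed: bool = False

@dataclass
class CashDesk:
    wristbands_given: int = 0
    transactions: list = field(default_factory=list)

    def redeem(self, ticket: Ticket) -> str:
        """Scan a ticket and hand out a wristband if it has not been used yet."""
        if ticket.redeemed:
            return "Already redeemed -- please see a troubleshooter."
        ticket.redeemed = True
        self.wristbands_given += 1
        self.transactions.append(ticket.code)
        return "Valid -- hand out a wristband."

desk = CashDesk()
print(desk.redeem(Ticket(code="33C3-ABCDEF")))   # "Valid -- hand out a wristband."
```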
We handled this event with those five cash desks, as every year. We had 22 different cash desk angels. It's a bit fewer than last year, because we actually didn't have that much work this year, and we had eight troubleshooters, the nice people on the side who were able to help you
if you lost your ticket, forgot your email address, stuff like that, don't ask. We actually peaked at 27 transactions per minute on day one at about I think 10 a.m. Yeah or noon.
So that means about five transactions per minute per cash desk, so a transaction every 12 seconds which is really, really good. For reference, last year it was 20 transactions per minute
which is also really good considering that last year we actually sold tickets on site so there was cash handling involved which takes a bit longer. And the maximum waiting time in the queue in the building this year was five minutes.
Again, that's due to us not handling any sales on site, so everything we had to do regularly was scan the QR code, give you your wristband, and off you go. That was about 17 minutes last year, which is also really good for handling money.
The 17 minutes is only considering the time in the active queue, so if you arrived two hours early to get a day pass, you waited for two hours until we opened. Right, that's about it from us, and thank you for your interest.
We'll see you next time, and I'm handing over to the NOC now.
Thank you, thank you, and a huge welcome to the network review of the 33rd Chaos Communication Congress. But unfortunately, we have to start with an apology.
We realized after last year's Congress that our network simply didn't deliver the Fritz Box experience that you deserved. Clearly, we have let you down. So for this Congress, we went back to the drawing board and didn't stop until we came up
with something radically new, something that will profoundly change the networking experience of the demanding conference attendees, something in line with our vision.
We realized that networks are the most frequently used internet infrastructure at Congress. After all, we expect our networks to keep us connected to the Facebooks and yahoos of our peers. We know that everyone can provide you with a connection to your daily digital world
but we are striving for more. As the leading internet provider within the CCC ecosystem, we want to provide you with a remarkable, easy to use and highly scalable network
from renewable local sources. And we failed to deliver this last year. And here is why. This is the network design we used last year. As you can see, the network is clearly unbalanced. There are crooked lines in there, crooked links. Overall, it's not very nice to look at.
The result, of course, was a disappointing experience for you, the user. So what we have done this year is we have planned the exact same network but with a new drawing tool.
Look at those beautiful orange and blue lines with pleasingly angled links and naturally balanced distribution points. It is amazingly simple and just beautiful to look at.
But we didn't stop there. Ever since we started building up the network here in Hamburg, we have always had a dark fiber from our data center IPHH here into the CCH. But this year, our incredibly professional and amazing engineers found a completely new way
to light this fiber. As you can see in this diagram, the fiber is now blue and slightly thicker than last time, which makes it a lot better, of course. And also, it now runs 100 gigabit ethernet, so that's good. But that still wasn't enough for us. Deutsche Telekom upped their sponsoring
and enabled vectoring on their fiber. So they were able to provide us with an additional 100 gig redundant fiber uplink connection into this building. With this simple and innovative uplink redundancy,
we knew we needed to redesign the whole core network infrastructure from the ground up. This year, we distributed the available bandwidth over not one, but two powerful carrier-grade core routers. But with this groundbreaking approach,
we suddenly realized that we exceeded the physical limitations of the building's fiber infrastructure. In other words, we only had multimode fiber available, which is orange, as you can see in the photo. But for 100 gigabit ethernet,
we needed single-mode fiber, which is yellow. Luckily, during a week-long team outing of our dedicated engineering department, we found a brilliant solution for this complicated issue. We pulled not one, but two new single-mode fibers
through the building in order to provide you the network experience that you deserve. Now, with this innovative and strong backbone in place, we identified the second problem in our previous network design. Some of our users could only connect their equipment
to a measly fast ethernet port. In line with our vision, we can now provide gigabit to all users on all access switches. But there's one more thing.
In some places, we were even able to uplink the switches with 10 gigabit fiber connection to the table. And as you can see here, all switches were carefully pre-staged by our safety-aware engineering team. After having solved all those annoying cable issues,
we felt that we were held back by that antiquated concept of wire cabling, and decided that wireless is the future. So from this point on forward, we put all our energy into reinventing the wireless experience for the advanced Congress attendee.
We felt that the old approach of suspending access points from the ceiling was keeping the network too far away from you, the user. So this year, we were finally bringing the network closer to you. In lecture halls one and two, you will now find access points under your seats.
But don't worry, in the unlikely case of the loss of connectivity, network cables would automatically be deployed from the ceiling.
So to summarize our efforts, we had 180 gigabits of total uplink capacity. The peak uplink usage was over 30 gigabits per second,
both in and out. We had almost 8,000 Wi-Fi users at peak times, and almost 80% of them were on the 5 GHz band. We had 189 access points deployed, 121 access switches with a combined 6,160 ports, and clearly all graphs went up and to the right.
So, in conclusion, we have terribly failed you last year. However, through teamwork and dedication, with the help of an amazing NOC team, a helpful help desk,
and with the support of all these lovely companies, we have managed to deliver. Because, as you can see in this scientific diagram, it went up to 11. Thank you.
So we realized that we didn't actually have any facts in this talk. So we have a bit of time for a few questions. Apparently we answered all the questions,
which is quite good. There's one question there. Sorry? Please repeat the question. So the question was which new drawing tool we used, and it's called Inkscape.
You realize that the two of you will have to give this presentation from now on in perpetuity, right? You're on the hook now.
Well done. Thank you, Niels. Wi-Fi passwords, yes? So the question was, I assume, what were the most popular Wi-Fi passwords?
And we did that statistic, it was about the same as last year. So we figured it's not funny anymore. But yeah, that's the same thing. 33C3, FUBAR, and blah. Yes? Hi, was it because of you that the CCH was for a moment
located in London or Dublin? So the issue with geolocation is that there are a lot of different services and some of these services work by looking at the MAC address of the wireless access point. And we use these wireless access points at many different events.
For example, at this summer's hacker camp in the UK, at EMF camp, and also previous other events, and so yeah. Oh, so it's not a feature, it's a bug. Yeah, that's a bug, but it's hard to solve because these systems are self-learning, but it takes a while. So yeah, you will get located in Hamburg at some point,
and also the AP addresses are used everywhere. So yeah, it's unfortunately nothing we can fix. As an additional feature, the next time you will be at the conference where those access points are used, you will be in Hamburg. So there's that.
Yes? Okay, so people on the internet are wondering since Juniper dropped you last year, where did you get the network equipment this year? Juniper decided to not let us down again and they have supported us in a great way this year.
They gave us all the 100 gig equipment we could ask for, and I think we got 1.2 metric tons of equipment again, so that was really smooth. How much of it was cat content?
Well, as you know, we do not do deep packet inspection, but I assume all of the content is cat content. So I think we have time for one more question if there is one. Sorry?
Yeah, what were the IPv6 statistics? I assume it's 5,000% or something, because at one point our monitoring broke, but what was it? I think the addresses are four times as long as IPv4. Yes, that. That's a statistic. The number is on the public dashboard. So it's dashboard.congress.ccc.de.
You should be able to find it there. We didn't include all those graphs because we figured we should use the time for doing a talk without any content and you have these graphs anyway. Yes. Did you get any abuse complaints? So, well, there's the usual amount of automated emails.
There were a few that weren't automated and some were actually about something serious, but it was less than last year. I think there were only three calls in total. So yeah, you guys obviously behaved better. All right. I think that... Just one. Okay, one more question.
The question was how many access points crashed and I don't believe that any access points crashed. Is that correct? Well, yeah, apparently, well, the access points. We had a few issues with the controller at one point, but that was worked with, so.
All right. Yeah, there was one guy standing. One very last question. How many DDoS attacks did you have? Outgoing one. Incoming.
Which actually is quite annoying because you really should use the bandwidth for better things than shoot other people on the internet with packets. That's just silly. Yeah, we don't endorse this kind of behavior. We think it's idiotic and it's similar to running around breaking infrastructure and toilets
or something that's just stupid. So please don't do that. Okay. All right, we'll hand it over to the VOC now. The lovely people who brought you all the streams and the video recordings. Okay, okay.
Is Blake here? [inaudible]
Yeah, he's here. He's here. He fixed the door. He fixed the door. Yeah. He fixed it. He fixed it.
So it looks like nobody has used this HDMI socket before.
Oh, come on, they're all doing an awesome job. Just bear with them.
[inaudible]
So after some little difficulties, let's start.
So my name is Andy. I'm from the CCC VOC and... I'm Jenny, I'm also from the CCC VOC, and welcome to the infrastructure review of the VOC. So this time, the opening actually worked, and we will also see some slides in the future.
So it looks like while our technical stuff
worked pretty nicely this Congress, now it fails us.
So while we wait for the slides, I can give you some facts. This time we actually managed to get the opening working right from the start with audio and subtitles.
Maybe we can see the slide in a few seconds for it.
[Technical difficulties while getting the slides to show; inaudible crosstalk on stage.]
All right, so I guess I'm jumping in with the GSM, some GSM stats.
Meanwhile... so, we have a sad GSM spectrum situation. We've run a test network that was very helpful for several years at Congress.
And last year, no, actually, yeah, last year in spring, the Bundesnetzagentur gave away the last three frequencies. So it's not really possible to apply for a test network license the way we've done before. Last year, we got some help from the operator
who acquired this new spectrum, but this year, they were using that spectrum. And it was looking pretty bad for a long time. We didn't know if we were gonna be able to have a network at all. In the end, we actually did get to loan
five ARFCNs from Deutsche Telekom, thanks a lot. I don't know what it took to replan the cell phone network in downtown Hamburg at Christmas to do that, but pretty cool.
Unfortunately, it was too late to print any SIM cards with this year's theme. So yeah, sorry about that. And it was also so late that we had not a lot of staff working on this. But we managed to set up even more BTSs than last year.
We had nine in total this year, in these locations. Six of them were sysmoBTSs, three of them ip.access nanoBTSs. We'll get back to that in a little bit. And all of this was running in the 1800 MHz band. So we did some careful experiments.
Last year, we started using GPRS on one of the eight time slots in each BTS. And this year, we set it up a little bit differently. We activated it yesterday, with dynamic time slot assignment. However, only on the sysmoBTSs,
we'll get back to that in a bit. The configuration was such that depending on whether you, the users, were making calls or wanting to do GPRS packet traffic, the time slots get allocated differently. So previously, last year, we had a fixed configuration where one eighth of the available capacity
was used for packet data, for GPRS. This time it was more dynamic, which meant that we had a lot more possible capacity for GPRS, but it ended up not being used so much. We'll get to the numbers in a second.
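To make the fixed-versus-dynamic difference concrete, here is a toy model only; it is not the actual OpenBSC/PCU configuration, and the real slot accounting (signalling channels and so on) is more involved. With eight time slots per BTS, last year exactly one slot was reserved for GPRS; this year, idle slots could carry packet data and were handed back whenever voice calls needed them.

```python
# Toy model of GPRS capacity on one BTS with 8 time slots -- an illustration
# of fixed vs. dynamic allocation, not the real OpenBSC configuration.
TIMESLOTS = 8

def gprs_slots_fixed(active_calls: int) -> int:
    """Last year's scheme: exactly one time slot permanently reserved for GPRS."""
    return 1

def gprs_slots_dynamic(active_calls: int) -> int:
    """This year's scheme: any slot not carrying a call may carry packet data
    (one slot is kept aside here as a stand-in for signalling)."""
    return max(0, TIMESLOTS - 1 - active_calls)

for calls in (0, 3, 6):
    print(f"{calls} active calls -> fixed: {gprs_slots_fixed(calls)} slot(s), "
          f"dynamic: {gprs_slots_dynamic(calls)} slot(s)")
```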
We had 3,750 subscribers signed on with SIM cards, so including SIM cards from lots of old events. And sadly, only 300 new SIM cards, which we had left over from last year.
About 1,200 created calls and 400 established calls. So the difference is: creating a call is when you dial, and an established call is someone actually picking up, and there can be lots of reasons for a call not getting established.
The phone you're calling is just not online. There are no free channels, et cetera, et cetera. Text messages, couple of thousand, 3,200 sent, 2,800 roughly delivered. That's pretty good, not many that didn't arrive
at their destination. And the GPRS numbers are quite a bit lower compared to last year, even though we had six times as much potential capacity allocated or dynamically allocated.
It was about a third of the bytes received and a fifth of the bytes transmitted, compared to last year (received on the network side here, so actually, I guess those two should be swapped). So you were better at using GPRS last year.
Then we have some fun ip.access bugs. So this is the nanoBTS, the three units that I mentioned. They're not completely stable, especially when we turn on GPRS. You know, I don't know.
So these are some error messages that they send out to the OpenBSC base station controller that we're using. Plain text error messages, and, you know, if someone knows C or C++ or Java: it's an assert checking that this queue-allocated thing
is either "allocated magic" or "not allocated magic". So the queue is, I guess, not allocated, but it's also not not-allocated. You know, I don't know what they've done there. I wanna say thanks to the heroes
that were running this network because we got the frequency so late. Many of the people who have been helping before, they weren't so excited about joining or they had already made other plans. They did help with setup and teardown but essentially, the operation of this network
was done by three people, two of whom were doing this for the very first time this year. And I wanna say thank you to you guys who were using this network, because it helps a lot to find issues
and improve the Osmocom software OpenBSC. Thanks.
So in the end, we got it working. Looks like Murphy got us in the end. Okay, as we already told you, the opening worked, with audio,
with translations, with subtitles and everything. So, great. So, this year's streaming CDN: we had 17 edge relays, three in-house relays. All the Deutsche Telekom customers were directly fed, like since 31C3, and we delivered over 130 terabytes.
So this time, we decided to support an additional translated language in halls two and one, with several codecs, and with the Sendezentrum and so on.
We got very, very many video feeds, which we all had to mirror and so on, yeah, you know. Also, this time, for the first year, we used MPEG-DASH. It was only a beta test. It worked starting on day two.
And there were some FFmpeg extensions, and the patches are incoming. But only about 30 people per day used it, so yeah. But now we know how to do it, so, great.
We used 10GE everything, everywhere this time. So basically we had four 10GE boxes, one 40GE, but we didn't use it to full capacity. So please watch more streams. Or we will start producing in 4K.
So here are our statistics for the stream viewers. Here we have the three peaks, which are the Fnord-Jahresrückblick, Methodisch Inkorrekt and the CCC-Jahresrückblick.
So actually, more people watched the Fnord-Jahresrückblick, the big Fnord one, than Methodisch Inkorrekt. But Methodisch Inkorrekt had higher bandwidth because there was more movement, and yeah. More stuff lying around.
We also, at our fifth room, the Sendezentrum, tested our new intercom setup based on a Raspberry Pi. So everything over Ethernet, and yeah, it worked quite well. And we now also have intercoms for our decentralized events, yeah, great.
Also, this time we, for the first time, used Voctomix, which is the software video mixer we developed for Congress. It was used in hall six and hall G. And it's on GitHub, so please use it.
And yeah, we are also trying to finally eliminate Flash. So for the time between
the start of the live stream and the publishing of the first finalized recording, we have the service called Relive, where you can... it's basically a stream dump, but provided by us. And for some browsers we had to use Flash there, because HLS has playlists and so on, and isn't supported natively everywhere.
And also, on day two there was a nice guy who said, hey, here is this JavaScript HLS player, why don't you use it? And our streaming team said, yeah, great. Also, this year we, for the first time, had an assembly where you could learn
about how the C3VOC operates, about how to use and set up Voctomix, and how to build your own C3VOC, basically, for small events and places where we can't go. So we had several self-organized sessions about Voctomix, and also the HDMI guy had a session there
and so on, so we will continue this thing and it was nice to talk to you. We had an overwhelming interest from the angels,
actually, in the first introduction meeting for the video angels, we basically happily filled up this hall, there were over 250 angels, and there were about 12 hours of angel shift work per talk. So each talk, each session in the schedule, had 12 hours of work behind it.
Like last year, in hall two and hall one, we had subtitles in, yeah, live subtitles, generated by angels inside of the room,
it was about 80, you can basically read the numbers yourself. We also added subtitles for this room, for Methodisch Inkorrekt, and yeah, you see the numbers there, and actually I think this 507 strokes per minute is quite good.
But there is more help needed because not all of the live subtitles get reused for the releases, so we need your help to subtitle all the talks with all the languages, and if you want to help, go to c3subtitles.de
where you can see how to subtitle the talks. Yeah, really do that, please. So here's a screenshot of the interface the angels used to generate the subtitles, yeah.
There were six subtitle angels working simultaneously for Methodisch Inkorrekt. Oh yeah, nearly the last slide. So I don't know if you're aware of this Markov bot which generates random tweets, and we find it quite nice. So all talks will go to media.ccc.de,
and the YouTube channel with the same name; please don't use the other ones. If you subscribed to one of those, please unsubscribe, and so on, you know it. Just read the blog post from last year about this YouTube problem. And we also have to thank our sponsors because, yeah.
So, any questions?
Great, so who's next? And by the way, this was a live presentation over Wi-Fi, what's a Wi-Fi?
Which worked until we came on stage, and now we're using UMTS, and this cable is not connected, why? Oh, nice, the microphone works. That's a good thing.
Ah, that's my slide. Little bit off center, but I think that's fine. Better than no slide at all. So, I'm Sebastian from the Seidenstrasse team, and this year we also decided that we need to improve your user experience. Because last year, last year we had the field telephones,
and you had to call when you wanted to route the capsule, and you had to do most of the routing by hand, and to be honest, this is a hacker congress. We don't call people, we press buttons. So what we did, we added more auto routers.
We had three of those routers. We had them spread out all over the building. This was also one of the largest installations we did so far, because hey, it's the last time in Hamburg, and we know the building, we know all the secret places where we can sneak the pipes through, and we thought, okay, for the last time, let's use all of it.
And it was really ambitious, and we had one auto router when we started to set up here, and all the other auto routers were basically built on site with materials that were available. The next thing we needed: we needed to get across
these huge fire doors. So basically at the elevators, the elevators, yeah, you know, the escalators, the escalators is the word, the escalators leading from the Sendezentrum up into hall one. There's this huge fire door that's retracted into the floor, and we are basically not allowed to cross there
with our pipe, because the door cannot cut the pipe or something like that, and then the door is blocked, and in case of fire, we have a problem. So we decided to build an emergency disconnect, which worked electromagnetically. So in the normal case, the electromagnets had power,
and the tube was held in place by electromagnets, and as soon as the power fails, for example, because the door opens and triggers our switch, the electromagnets turn off, and the tube comes crashing down, and hopefully hurts no one.
And since we already had automatic routers, some people decided we also needed an automatic capsule scanner, so that you can send capsules via Data Matrix codes that you stick onto them. We did not use the scanner here this year; it worked, but we had some trouble connecting it,
and I'm going to talk about that in a minute. The other thing we added was a new user interface. So instead of the field telephones, we wanted to provide you with a keypad. Just type the number of the station you want to send something to, press enter, wait until the display tells you to stick the capsule in the tube, suction starts, and everything is fine.
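As a sketch of that station-side flow (purely hypothetical names; the real stations are Arduino-class microcontrollers talking over the custom signalling bus), the sequence is: the keypad collects a destination, the station asks the auto routers to set up the path, and only then is the user told to insert the capsule and the suction started.

```python
# Hypothetical sketch of the keypad-station sequence described above. The real
# Seidenstrasse stations run on microcontrollers and talk over a custom
# signalling bus, so treat this purely as an illustration of the flow.
def send_capsule(destination: int, request_route, start_suction, display) -> None:
    """One send operation: destination typed on the keypad -> capsule on its way."""
    display(f"Routing to station {destination} ...")
    if not request_route(destination):   # ask the auto routers to switch the path
        display("No route available, sorry.")
        return
    display("Insert capsule now.")       # only now is the user asked to load
    start_suction()                      # vacuum pulls the capsule through the tube
    display("Capsule on its way!")

# Example wiring with dummy callbacks standing in for the real bus messages:
send_capsule(42,
             request_route=lambda dest: True,
             start_suction=lambda: None,
             display=print)
```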
Yeah, so we thought. The problem was, I basically spent the last two months working on parts of the electronics, and there are a lot of custom bus transceivers in there, because we use our own bus architecture. And I worked really hard to finish them in time, and since I arrived here pretty late,
I decided to send them ahead using DHL. And somehow, DHL seems to be really afraid of us. They sabotage us. So the parcel with the bus transceivers, there were like 20 bus transceivers, and I had 10 finished in advance, and sent them here. And yeah, the 10 bus transceivers arrived yesterday.
So next time, we'd better send it via Seidenstrasse. It's faster. So in the end, we had to work with what was available, which were like six or seven bus transceivers, which I had kept back because they didn't work
when I packed the parcel, and I had to fix them, so I spent the morning of Christmas soldering in my basement, which was fine by me. I'm not a huge Christmas fan anyway.
So let's talk about some numbers. So we had 10 stations. Each station needed a transceiver, so we didn't have enough transceivers for all our stations. First problem. Next problem: each router also needs a transceiver. So we were another three transceivers short. We also used 1,000 meters of tube, so one kilometer,
which is a bit more than last year, but I don't think it's the most tube we've ever used. We are not quite sure about the numbers, because we basically counted the leftover rolls of tube, and then we asked ourselves how much tube did we order in the beginning, and was there anything left,
and yeah, it's all a bit fuzzy. We also, since we had these huge hardware problems to start with, we worked basically night shift from day zero to day one, and from day one to day two to get anything at all going, and at least we had our automatic vacuum cleaners going
at that time. We had our automatic push-pull switch going. We also had one router going. The others had mechanical problems, and at that time, for some reason, several Arduino microcontrollers failed, and then we were left with not enough microcontrollers for all our stations, and it was 2016 all over again.
A lot of hardware that was really dear to us and important just died for no reason. So another important fact is we could not count how many capsules we routed, because I used the hardware that was meant for the statistics node for something more important.
So there are just three question marks. I think it must be something around 100 capsules, from my guesstimation. Our signaling bus is becoming more and more important because we are going to automate the whole network again next time, and this time was the first real test with many devices on the bus,
and it worked kind of well. There were some odd things in the beginning which we were able to fix, and we used about a kilometer of cable because everywhere we put a tube, we also needed to put a cable, and luckily we've got these real strong field telephone cables which we can use to tie the tube to the ceiling. So instead of ropes like last year,
we could use our cable to tie the tube up, which looks a bit dangerous and not really well thought out at first sight, but this cable is really robust and it worked pretty well, so that's fine. Also, it was the first time that we tested this bus with many transceivers that were far apart.
So we had up to 400 meters between transceivers, and in the end it all worked out fine. We even had some of the keypads running, we had some automatic routing running. The automatic routing part was delayed mainly because our network control software couldn't be tested, because we couldn't get the hardware on site in time to
run some integration tests and you know how things go. You write a specification, somebody else writes another component according to that specification and in the end, everything breaks as soon as you connect it. So we were a bit unlucky this time. So in the end, we made it work for you just in time
so that we can tear it down again and that's an important point. We could use some help tearing everything down, putting everything back into storage. It would be really nice if some of you could spare some time after the closing event to help us tear down. That would be really cool and also we would like to improve this
because this year's user experience wasn't that good at all. So next year, we want to have everything running smoothly that we planned for this year. We've already distributed the hardware among us so there won't be the case that just because my packet is late,
the whole hardware is missing and also we need people that can code. We also might appreciate some people that help us with operations next year. So if you want to participate with Seidenstrasse, then just try our mailing list, follow us on Twitter, try IRC. We also have a GitHub account with all the code inside it
so if you want to have a look at the code and see if there's something you can improve, take a look there. We really appreciate it. So that's it from my side.
In case you were wondering, it seems that neither the POC nor the CERT are about to speak.