Infrastructure Review
Formal Metadata
Number of parts: 85
License: CC Attribution 3.0 Germany: You may use and modify the work or its content for any legal purpose, and copy, distribute and make it publicly available in unchanged or modified form, provided you credit the author/rights holder in the manner specified by them.
Identifiers: 10.5446/38084 (DOI)
Transcript: English (automatically generated)
00:14
Hi, good afternoon everyone. My name's Will and I'm here with my colleague Ian from the NOC
00:20
and we're going to show you some funny photos of what we did to build the network here. So, what's the network actually for? Here is our vision, our strategic goals and all that blah. We wanted to give everyone gigabits, really, these days. Some decent Wi-Fi, which has worked pretty well. And also, in line with our aims of having basically properly provisioned, enhanced connectivity,
00:48
filter free, net neutrality, all those things you hear in the papers these days. And we wanted to have a lot of fun while doing it and maybe top our tans up. So, how do we design this? Well, we have an intersection team which goes and does various hacker camps.
01:05
So, it means we kind of run different size networks every year, almost, these days. But this is the biggest one ever, obviously. We had 37 Datenklos, which is maybe 15 more than the previous time we've done an event.
01:24
47 fiber cables laid across the fields totaling 7.2 kilometers, 78 edge switches and just over 100 wireless access points.
01:44
We decided to run a collapsed core for this event to reduce interdependencies. So, everything as much as possible goes back to a central location. So, this site you see all around us, I mean it has a great industrial history. That means there is actually damn train tracks everywhere and unknown underground services.
02:06
And one of the problems that appeared during our planning for this event is that we weren't able, without lots of special measures, to go very deep into the soil, because there are unknown cables and really quite a lot of train tracks as well, which makes running cables difficult.
02:27
So, we didn't do this. We did however manage to do this during the build up.
02:44
Fibers do not generally survive being run over by a train and this one is kind of broken. This was due to an accident basically. Anyway, no harm done there.
03:00
So, this actually all required quite a lot of planning and we did this in a lot more depth than previously. We teamed up with Avoc for planning in OpenStreetMap. We then were able to do a lot of automation of the cable selection and putting the right cables in the right places using OpenStreetMap API and some tools we built ourselves.
03:20
And a side effect of having all that information available is that it was then publicly available to you guys. So, you can see where the Datenklos are if you look at the CampMap URL. And it enables us to check the design properly and generally make sure that we don't spend ages pulling a cable only to find it's five meters too short, which is really super frustrating.
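As a rough illustration of that kind of sanity check (this is not the NOC's actual tooling; the coordinates and the 10% slack factor below are invented), a planned cable run exported from OpenStreetMap is just a list of node coordinates, and summing the great-circle distances plus some slack tells you how much fiber to order:

```python
import math

def haversine_m(a, b):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (a[0], a[1], b[0], b[1]))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(h))

def required_cable_m(way, slack=0.10):
    """Length of an OSM-style list of (lat, lon) way nodes, plus some slack."""
    path = sum(haversine_m(p, q) for p, q in zip(way, way[1:]))
    return path * (1 + slack)

# Hypothetical cable route across the field (way node coordinates).
route = [(53.0310, 13.3050), (53.0315, 13.3072), (53.0322, 13.3088)]
print(f"order at least {required_cable_m(route):.0f} m of fiber")
```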
03:45
Here is a plan extracted from that of the site. You probably won't be able to see it because the site is quite large but you can check this online. But you can see our main cable routes across the site here which are the green lines.
04:01
And really we wanted to get to everywhere that you guys occupied. And here is another diagram you won't be able to see which is our physical infrastructure. And what we actually do is have a lot of multi-core fiber cables which are connected together.
04:23
And what this means is that basically we have light paths all the way from, say, an edge Datenklo all the way over there back to the NOC data center. It means that if there are, for instance, problems with power or equipment in a more peripheral location, then that won't have as large an impact, and that therefore makes the network more reliable for you guys.
04:48
Uplink, well we had to get the internet here. We spent some time on this because as you traveled here you know that this place is quite remote. There is not a great deal of infrastructure but then we found actually about 2 kilometers that way.
05:04
There is a large high-voltage line, and using our contacts we were able to get those guys to give us a 10 gig wave back to Berlin, which is great. And for those of you who have seen our presentations about Congress networks, that meant we could do much the same thing.
05:23
So yeah, there is a splice enclosure on a pylon, this is it, about 2 kilometers from the site. But it wasn't all that simple because there is a river there, there is also a lake the other side. So we would have to carry the fiber quite a long way.
05:40
So we used a lot of this lightweight fiber which actually 2 kilometers of this only weighs 20 kilos so you can actually pick it up and carry it around. So we had to cross the lake.
06:08
Our experiment showed that the canoe, the 2 man canoe was actually better for unrolling the fiber than the rubber boat because you could just put the stake through the spindle and unwind it. So yeah canoes seem pretty good for that purpose.
06:23
So yeah we had an interesting time, I can't actually use a canoe so yeah it took some time. We of course, here is the fiber in the canoe and we had to do obviously some splicing out in the field so there is quite a lot sitting getting stung by insects and all that kind of stuff.
06:44
A lot of you know about the rodent damage, in fact I have some of the offending pieces of fiber here if anyone wants to come and have a look at them. And this obviously had quite some coverage and lots of people found this quite amusing. What we actually did to mitigate this was we went along in this stretch where these animals live
07:05
and we basically invaded and hung the fiber up higher in the tree so we weren't really obstructing where they go. But yeah it's pretty easy with this thin fiber for it to be damaged by rodent activity.
07:21
So back to technical stuff. We did our layer 3 design, a BGP edge with a pair of Juniper MX104s in Berlin and here in our NOC DC in the Ziegeleipark. And then we dusted off a very venerable Force10 E600 that I think was last used at 29C3 or 28C3.
07:43
But it did a good job there and actually was just what we needed here, so why not. We did the usual: we used the CCC address space for IPv4 and then a temporary /16 from the RIPE NCC. And of course v6. It's pretty much the same as last time.
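Purely as a sketch of how a temporary /16 plus an IPv6 block can be carved up per Datenklo with Python's ipaddress module; the prefixes here are documentation/private placeholders rather than the actual camp assignment, and the per-Datenklo /22 and /64 sizes are assumptions:

```python
import ipaddress

# Placeholder prefixes: the camp used a temporary /16 from the RIPE NCC plus IPv6,
# but these example ranges and the per-Datenklo subnet sizes are made up.
v4_pool = ipaddress.ip_network("10.42.0.0/16")
v6_pool = ipaddress.ip_network("2001:db8:c0de::/48")

datenklos = [f"DK{i:02d}" for i in range(1, 38)]   # 37 Datenklos

v4_subnets = v4_pool.subnets(new_prefix=22)        # one /22 per Datenklo
v6_subnets = v6_pool.subnets(new_prefix=64)        # one /64 per Datenklo

plan = {dk: (next(v4_subnets), next(v6_subnets)) for dk in datenklos}

for dk, (v4, v6) in list(plan.items())[:3]:
    print(dk, v4, v6)
```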
08:04
Just a few photos from the NOC DC; we actually even managed to label stuff this time. Edge. So we rented a lot of HP ProCurve 2530 switches with 24 gigabit ports on the front.
08:28
And then uplinked those into the core with 2x1 GigE. These are single-fiber optics, so they use two different wavelengths on the same fiber. And that reduces the amount of fiber that we need to pull around the fields and also makes troubleshooting easier.
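Roughly what the single-fiber optics buy you in pulled strands, as a back-of-the-envelope count using the 78 edge switches and 2x1 GigE uplinks mentioned earlier (the real cabling plan shares trunk cables, so this is only illustrative):

```python
edge_switches = 78
uplinks_per_switch = 2             # 2 x 1 GigE per edge switch

links = edge_switches * uplinks_per_switch
duplex_strands = links * 2         # classic duplex optics: TX and RX on separate strands
bidi_strands = links * 1           # single-fiber (BiDi) optics: one strand per link

print(f"{links} uplinks: {duplex_strands} strands with duplex optics, "
      f"{bidi_strands} with BiDi, saving {duplex_strands - bidi_strands}")
```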
08:40
For some locations we wanted to provide 10 GigE uplinks, so we used some Juniper switches there. And we were testing some other hardware as well: a Cumulus gigabit PoE switch with 10 GigE uplinks and some Huawei switches. So although the bulk of the equipment was the ProCurves, we have been using some other stuff as well.
09:04
Data center. It's often a pain doing this. It's been damned hot here, as many of you know, and really it's been that way for the past few weeks. So we have a DC container with the NOC and VOC services, and the uplink actually terminates in there. We had lots and lots of aircon problems.
09:22
In fact, we started off with two aircon units and we ended up with seven. Which is more than the number of switches we had in the place. Yeah, at least it stayed cool there. I'll move over to Ian for the Wi-Fi.
09:43
So, Wi-Fi. We've been using a similar setup as at the last Congress, which was a dual Aruba controller running in a high-availability setup. We've deployed 101 802.11n and 802.11ac access points, and that comes to an average of about 1.5 access points per Datenklo.
10:07
We would mount one access point in the Datenklo, and then we have another, more outdoor-suited, IP65 access point in the neighborhood to cover the edges. Because what we see is that around the Datenklo we have around 30 meters of good coverage, and then we still need to fill in the other gaps.
10:26
So we used a couple of outdoor access points. So you might have seen them hanging around at several places. These are the access points which look a little bit like security cameras, the big white dome access points.
10:41
And we are deploying multiple access points in the track tent and workshop tents like this to have more capacity. Because in a room like this there's like, what is it, 500 people there. So one access point doesn't cut it. So you would need multiple access points to have enough capacity.
11:01
So we had a peak of 2,300 associated clients and we did around 1.25 gigabits. That's RX and TX aggregated. We've seen around 10,000 unique devices. So that's what we've seen over the last couple of days and that's not concurrently online in the network.
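Spread evenly over the roughly 101 access points (which the load of course wasn't), those peak figures work out to something like this:

```python
aps = 101                  # deployed access points
peak_clients = 2300        # peak associated clients
peak_gbps = 1.25           # peak RX+TX aggregate, Gbit/s
unique_devices = 10000     # unique devices seen over the whole camp

print(f"~{peak_clients / aps:.0f} clients per AP at the peak")
print(f"~{peak_gbps * 1000 / aps:.1f} Mbit/s per AP at the peak")
print(f"~{unique_devices / peak_clients:.1f}x more devices seen than online at once")
```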
11:27
So, yeah, we were running 802.1X again, like at Congress. So people could use a random username and password to log into the network. And so on the left side we have a nice top 11 there, and you can see why we made it a top 11.
11:43
So, yeah, the device types on the network, it's mostly smartphones.
12:03
So you can see that Android and iOS are in the top three. So a large part of the network is all smartphones. And then other than that we have, of course, Linux devices. And then, as we would expect at a CCC event, Windows usage is very, very low.
12:23
So that's good. So regarding the type of devices that are connected to the network, so more than 50% were actually 5 gigahertz capable.
12:41
So that's pretty good. Last Congress we did, I think, around 65%. But, yeah, it's good that we have more than 50% of devices actually being 5 gigahertz capable. And we're also seeing quite a lot of the newer generation devices, the 802.11ac devices. That's already 21%. And regarding the usage of the SSIDs, yeah, about 42% of the devices were on 802.1X.
13:06
That's a bit less than at the last Congress. But still pretty okay. So if you want to have some encryption on the Wi-Fi layer, then, yeah, use 802.1X.
13:22
More pretty graphs. So this is a graph where we are plotting the number of associations per field or per region. And you can see, I'm not sure if the mouse pointer works, but you can see over here that the green lines, this peaks over here,
13:40
are actually the people getting in and out of the track tents. So we can pretty much see that which, yeah, if there's a good talk going on in a tent, so if there's lots of people there, then it must be good. And then at some point, like during night, we see people going to the east side of the camp, which is where the bear village is. So people are going to the bars and, yeah.
14:04
And you see the red line over here. That's the central platz where also the bars are. So it's pretty obvious where people are going. So the lightning storm yesterday.
14:28
Yeah, so we obviously have a lot of stuff missing here because we had to power down the DC. All the generators actually got turned off. But it's also interesting to see that the people were like moving to the,
14:43
they were getting into the big track tents. That's the blue line over here. And then people were moving to the central platz. That's the red line over here. So it's also interesting to see what's happening when you're having a situation like that. So challenges.
15:01
Yeah, we had to make a bit of a trade-off between coverage, capacity, and performance, because it's a very open field. So there's not a lot of attenuation, so there's quite a large chance that access points will end up on the same channel. So we don't want to mount the access points too high.
15:23
But then if we're mounting them lower, that could actually mean that we have less coverage. So we need to make this trade-off to get something working. And we did end up having quite a lot of high channel utilization in some areas. So that's actually the amount of time that the radio in an access point is busy
15:46
receiving and sending traffic. And once that channel gets more and more loaded, it will just slow down. And at some point it can break to the point that you will not even get an association anymore. So we had some radios that were averaging around 65% channel utilization,
16:02
which is very, very high, and peaking at 95%. Another issue we were facing is that there were a lot of rogue access points around. And that caused some devices to have roaming issues, because the device will receive so many BSSIDs.
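As a minimal sketch of how such channel-utilization figures can be turned into an overload alert; the per-radio samples and the thresholds below are invented, and a real Aruba controller exposes this data in its own way:

```python
# Hypothetical per-radio channel-utilization samples (fraction of airtime busy).
samples = {
    "dk-utrecht-5ghz":  [0.55, 0.68, 0.72, 0.95, 0.61],
    "dk-hamburg-5ghz":  [0.40, 0.52, 0.48, 0.66, 0.58],
    "track-south-5ghz": [0.20, 0.35, 0.30, 0.41, 0.28],
}

AVG_LIMIT, PEAK_LIMIT = 0.60, 0.90   # made-up alerting thresholds

for radio, util in samples.items():
    avg, peak = sum(util) / len(util), max(util)
    overloaded = avg > AVG_LIMIT or peak > PEAK_LIMIT
    verdict = "overloaded, add capacity or re-plan channels" if overloaded else "ok"
    print(f"{radio}: avg {avg:.0%}, peak {peak:.0%} -> {verdict}")
```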
16:22
For example, on the central platz, you can see very, very many BSSIDs around you. And your Wi-Fi device at some point will have issues selecting the correct network, because it's receiving so many beacons and so many probe responses. So in the future, we would like to do some more performance monitoring using Wi-Fi probes.
16:45
So we're looking into a solution for that so we can test the performance of the Wi-Fi network just independently of the Wi-Fi infrastructure itself. So we will have a couple of nodes connected to the network, which are doing periodic speed tests and latency tests
17:01
so we can better signal in which areas the Wi-Fi would be bad. Another problem we were facing is that the space blankets that were put around the Datenklos actually caused about 20 dB of extra signal attenuation. So that was very, very significant. So at some point, we had to remove the space blankets again to increase the signal on the field.
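A probe node of the kind described above could be as simple as the following sketch, which measures TCP connect time to a test host at a fixed interval; the target address and interval are placeholders, and a real deployment would also run throughput tests and report into the monitoring system:

```python
import socket
import time

TARGET = ("192.0.2.10", 80)    # placeholder test host inside the camp network
INTERVAL = 300                 # seconds between probes

def tcp_latency_ms(addr, timeout=2.0):
    """Measure TCP connect time as a crude reachability/latency probe."""
    start = time.monotonic()
    try:
        with socket.create_connection(addr, timeout=timeout):
            return (time.monotonic() - start) * 1000
    except OSError:
        return None

while True:
    rtt = tcp_latency_ms(TARGET)
    status = f"{rtt:.1f} ms" if rtt is not None else "unreachable"
    print(f"{time.strftime('%H:%M:%S')} probe to {TARGET[0]}: {status}")
    time.sleep(INTERVAL)
```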
17:28
And here's another graph that actually shows the five most busy access points with the channel utilization at 5 GHz. So you can see that there are access points that are peaking up to 90% channel utilization,
17:44
even in the 5 GHz band, and that was near DK Utrecht, DK Hamburg, and one of the access points in this track tent as well. Oh, we had another tweet today, which was pretty funny.
18:07
So, somebody's geolocation was a bit fucked up. Yeah, this was because most of the access points here have been used at a conference before. It was actually Hack In The Box at the Beurs van Berlage in Amsterdam,
18:21
so that's the reason why his location showed up as Beurs van Berlage, Damrak, and there are some sex shops around there. So, you want to take this one? So we didn't actually produce any, I don't think we produced any usable bandwidth signs this time around.
18:40
The uplink did get used quite well, peaking at 7.5 gigabits out. So we're pretty happy with this. Next time around, I guess we'll need more, because we always need more.
19:01
We also did some instrumentation of what happened inside the camp and saw a maximum backplane capacity on the E600 of 22.5 gigabits, so there's quite some traffic flowing around the site as well, which is nice. We did a new dashboard, which you can look at, and we're always eager to add more stats to that.
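For context, throughput figures like that are usually derived from periodic interface octet-counter samples; a minimal version of that calculation looks like this, with the counter values chosen to reproduce the 22.5 Gbit/s number rather than taken from the real switch:

```python
def rate_bps(octets_t0, octets_t1, interval_s, counter_bits=64):
    """Turn two interface octet-counter samples into a bit rate, handling wrap-around."""
    delta = (octets_t1 - octets_t0) % (2 ** counter_bits)
    return delta * 8 / interval_s

# Invented 64-bit octet-counter samples taken 30 s apart on a core port.
t0, t1 = 1_000_000_000_000, 1_084_375_000_000
print(f"{rate_bps(t0, t1, 30) / 1e9:.1f} Gbit/s")   # -> 22.5 Gbit/s
```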
19:22
We added some temperature sensors from the ICMP village and some other stuff. So, yeah, more to come there. I mean, it just shows here I've got a screenshot. It's the number of wireless users and speed and traffic used by the visitors.
19:43
All very shiny stuff, actually. Ticketing. We used OTRS for the pre-event, which is kind of a historical thing, and then for years we've always used Roundup on site because it's just really simple and people can just get started straight away with this. We only had 51 tickets come through, which is quite low, I think, actually.
20:02
So thanks very much to the NOC Help Desk for fielding the end-user queries and doing all the unplugging and stuff. You may notice we had some lights on the Datenklos. These were originally from the OHM2013 event, but they're actually very useful for us to diagnose any network problems.
20:23
So they had this interesting kind of thread. Oh, yeah, so somebody at DK Dublin was complaining that the lights on the LED poles on the Datenklo were too bright,
20:42
and they asked for it to be switched off, and we were like, well, at least our Belgian colleague here said that the stars are down, a team has been dispatched, ETA 10 light years. So what's the team actually behind this?
21:02
Well, it's actually quite a lot of people. There are more than 30 people in seven sub-teams, some of them quite young and some of them rather old gits like me. And we really started on site two weeks and one day ago today.
21:21
So I actually arrived here two weeks ago to start with the uplink stuff. And then, yeah, as I said before, at the info help desk, we're dealing with a lot of our end user queries and that sort of stuff. Actually, a lot of the equipment and services we use for this event
21:40
can't really be bought commercially. Either it costs quite a lot of money at commercial rates, or it's short-term stuff, or you need to borrow equipment and people just don't lend their stuff out. So we actually really rely on a lot of people who give us stuff for free. And so, yeah, really, really thanks to our uplinks, KPN, Strato, Sys11,
22:03
Ediscom, who supplied the 10 gig wave to Berlin, ECIX and Speedbone for housing of the Berlin side of the operation. And then quite a lot of hardware from Biblio, Cumulus, SecureLink, Aruba, FlexOptix. So really thanks to those guys for giving us quite a lot of equipment
22:24
and, yes, we will send it back. So, well, goodbye.
22:41
The network in the camping fields will be torn down starting about 1900 today, after the closing presentation. We will have everything kind of gone by 10 a.m. on Tuesday. So please be kind to our fibers as you see them in the fields because we want to roll them up and use them at the next event. So thank you very much.
23:11
Yeah, yeah, I get to do this, yeah. So I probably have time for a couple of questions. If anyone has some, come up to the mic.
23:21
Is there a mic? Oh, yes, the mic's here and here. I can't see anything up here. First up on my left. You mentioned that you got the fiber to the electricity pole. Was the fiber to Berlin already included on the network or did you need to add it?
23:40
Actually, there's an electricity substation really near Zehdenick, just down there. So the end-to-end, like the actual photons, as it were, go as far as Zehdenick and go into their DWDM equipment there. So the actual, you know, physical spliced-through piece of fiber is about six kilometers to camp, and then it's transported from there.
24:05
It's too far, really, to light. So it's not a physical pair of fibers all the way to Berlin. It goes into an optical network, which is pretty standard, really.
24:20
Do we see any more questions? No, then thanks very much. I'll pass over to the VOC.
24:42
I don't know how to use such a device with an apple on it. If it had a pangolin, I might be able to get it running.
25:06
Yeah, that looks pretty nice. So it's really, really cool to see that tent filled with people wanting to see what we built and how we built it. Oh, it's running automatically. Interesting.
25:21
Maybe composite? Thank you. So it's nice to see the tent being filled with people like that. But as you may know, there are a lot of people on the campsite, tearing down their villages or already on their way home or some may even not have been able to come here in the first place.
25:41
And we want the great experience you have here to be at least a little bit shared with them. So we are the VOC, and we are providing recording and streaming for the lecture halls and a little bit for the music on the field. And you may have seen our small and larger cats standing all around here.
26:05
So the VOC here is not alone. We don't have enough hardware to do an event as big as this, or like the Congress, and we always have helpers. And this year we had like the AJS and a guy from iSystems here, who provided hardware and also a lot of support.
26:25
And both of them ran one of the big tents. And we would really like to say thank you to them, because without them that would not have been possible at all.
26:45
So as you might have heard, the action is intense here at the camp, and tents are a little bit more complicated than a fully operational congress center. So the day we arrived here and tried to build our stuff up, we had like a tent with no walls and no floor in it.
27:07
And so some of our guys started climbing around and hanging up all the screens and the beamers there. And actually you may have noticed that the tents stayed in a skeleton state for quite a bit of time.
27:22
And that was not really planned like that. So between the build-up team finishing the tents and us setting up the audio-video equipment, there was like 0.125 days. So only some hours, and it was really, really hard to get it all done before the opening event. But we managed to do it, and it was really hard for the team.
27:44
So in the end, give the team who's not here on the stage a really big applause because they really, really worked hard to get that done.
28:02
So we have three main stages here. We have the two big tents, north and south. And yeah, their names are a bit complicated; they switched places sometimes and also got new names. So there was a little bit of confusion there, but in the end we managed to make it. And we also had the best stage in the Berlin village, which had a really interesting program.
28:23
And those were the three main stages we were working on. And additionally we had our container, one of the OC containers. The NOC had a similar one, you can see it on the screen. But the POC did it right, and they had like a big tent and a lot of space to party.
28:40
So next year we will try to learn from, no, in four years we will try to learn from the POC. As you can see, we have a lot of hardware there but the main device is standing in the DC. So what we have in our container is those really nice telelights.
29:01
They don't only show the time, but whenever something happens, like a talk starts or a talk finishes or one of our encoder processes stops working, they will blink and display a message about what's happening, and they work over the air. So we can carry them to the different locations we're going to and always be notified.
29:21
And also we have this really nice blinking light underneath it which actually makes a sound when it's turning because it's that crappy. But you can hear when something goes wrong which is also really good. So what we have in the DC are two new devices that we bought in the beginning of the year.
29:42
We call them the Minions because they're really small. They're like 10 centimeters wide, but they have really, really powerful cores. They have like 4 or 3.9 GHz i7 cores, and they did the whole encoding of the master HD files for the whole campsite. At least half of the talks they even did twice, because we missed something.
30:06
Those are the two devices that are actually producing the files you are downloading and viewing in the browser, at least most of them. And these are really, really nice devices, and I really like them, because they're so small you can carry them around with you, no problem.
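As a hedged illustration of what a release-encoding step on such a box can look like (this is not the VOC's actual pipeline, and the filenames and ffmpeg settings here are generic examples):

```python
import subprocess

def encode_release(master, output, height=720, crf=23):
    """Transcode a master HD recording into a smaller release file with ffmpeg."""
    cmd = [
        "ffmpeg", "-i", master,
        "-c:v", "libx264", "-preset", "veryfast", "-crf", str(crf),
        "-vf", f"scale=-2:{height}",
        "-c:a", "aac", "-b:a", "128k",
        output,
    ]
    subprocess.run(cmd, check=True)

# Hypothetical master file name; the real pipeline's names and formats differ.
encode_release("talk-1234-master.ts", "talk-1234-hd.mp4")
```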
30:22
We also have a bit of interesting gear standing up under the ceiling of one of the buildings in this direction, the one with the high chimney. We have Rohde & Schwarz transmitter equipment for FM radio and DVB-T, and DVB-T2 I think.
30:43
And these are really interesting devices, and we have played quite a lot with them. Actually, you were even able to listen to the talks going on here on the special radio station while you were driving around Zehdenick and doing the shopping for your village.
31:03
And the DVB-T also had a lot of special features; like, we had EPG running after some time. And we even managed to get, oh, is there a slide missing? Oh, I think there's a slide missing. We even had Teletext working, with a Twitter feed on Teletext and such.
31:34
But we weren't able to finish that as nicely as we wanted to. Like, we wanted you to be able to enter your own Teletext pages and send them out via DVB-T.
31:46
And we are really looking forward to getting that working at the next Congress, so everyone can have their own Teletext page then. So another thing: FM transmission is pretty old, and we like the new stuff.
32:03
So we had guys from the Open Digital Radio project bringing DAB+ transmitters there. And they also managed to stream slideshow versions of the video via DAB+. There are not that many radios out there that can receive that, but they brought us one.
32:21
And it seems it actually worked pretty well. So maybe it's really the future, I think, maybe-ish. So yeah, there were some special projects we have been working on. Like when the thunderstorm was announced, the CERT asked us if we can do a video explaining to the nerds how to secure their tent.
32:47
And as you may have seen, there were some tents that were really in need of a little help there. And the thing is, we produced that video, we cut it, and then we uploaded it to our storage.
33:00
We have a mirror of video files on the campsite, and we went away. And when we came back and looked at our graphs, they kind of looked like this. And what you see there is like 1.2 terabytes of traffic produced by this single file.
33:21
And we were pretty much maxing out our link. Actually, yeah, it was peaking at around 1.2 gigabits per second for an hour or two, because everyone on the campsite was viewing that one video. And looking at the connection stats, we saw like 50,000 people watching that.
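Those numbers are at least self-consistent; a quick back-of-the-envelope check, assuming a file size of around 25 MB for the video (the actual size isn't mentioned in the talk):

```python
gbps = 1.2                                   # sustained rate seen on the graphs
hours = 2                                    # roughly how long the peak lasted
total_tb = gbps / 8 * 3600 * hours / 1000    # Gbit/s -> GB/s -> GB -> TB

file_mb = 25                                 # assumed video file size, not from the talk
downloads = 1.2e6 / file_mb                  # 1.2 TB in MB divided by the file size

print(f"~{total_tb:.2f} TB moved at {gbps} Gbit/s over {hours} h")
print(f"1.2 TB / {file_mb} MB per download is roughly {downloads:,.0f} full downloads")
```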
33:45
It's like crazy. We were not really sure; it might have been a software bug, someone's notebook downloading the file over and over again, but even then, the numbers are pretty awesome. So as I'm talking about the stats: we also have a nice dashboard, and it's actually the same technology as you have seen.
34:05
But ours is not public, I think. And as you can see in the top row, we peaked like at 2 gigabits per second with the streaming, so that's all streaming relays added up. The one on the campsite, we have a local relay here, as well as the ones in the internet.
34:23
And that's actually not that much. Like on the Congress, we were like around 17 gigabits. But hey, okay, it's about the sunshine and camping, and yeah, I know you're likely not to watch the streams, but they were great talks, so maybe you should take a look at the recordings then.
34:42
We peaked around 600 viewers at all stages, but the biggest stage was actually the bear stage, with about 400 viewers watching a podcast there, and it was more than the peak at the tent. So yeah, actually it seems to be the bear stage was more interesting.
35:05
Looking at the stats, we had like two relays on the internet, and one here on the campsite, and we did split routing and split systems, so that people watching on the campsite got their traffic from the relays here, and people watching from the internet got their traffic from there.
35:24
This year, we used HTTP all the way down, so all streams were delivered via HTTP, and this enabled us to enable TLS and deliver at least the option for everybody to watch the streams via an encrypted connection, because encrypting everything is the right thing to do.
35:44
And this also meant that we didn't need to use Flash anymore, so we totally scrapped that. And because we know the hamsters around here,
36:03
we decided to do everything really required to run our system on the site, so we did all transcoding and all release encoding on the site, and it turned out to be a good idea. And yeah, this was a little different to what we did at Congress,
36:20
so it was a new thing for us to do everything here. Yeah, we tried to do multicast. Well, we planned to try to do it, but actually the problem is that with multicast, you're sending out every packet once, and if the device didn't receive it properly, then it's gone.
36:43
So we need some kind of forward error correction on our streams, inside the video stream, for example. And we have the code to do that, because we're using that on DVB-T too, but it seems there is no device and no program out there able to play that back. So VLC doesn't do it, and FFmpeg doesn't do it.
37:03
So if you're working on a media player and want to help implement forward error correction in the player, so that next year we can use multicast, then please talk to us; we would really like some help there.
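For anyone wondering what packet-level forward error correction means here, a toy XOR-parity scheme illustrates the idea: one parity packet per group lets the receiver rebuild any single lost packet. This is only an illustration of the principle, not the standardized FEC schemes used in broadcast streaming:

```python
from functools import reduce

def xor_parity(packets):
    """One parity packet for a group of equal-length packets."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(received, parity):
    """Rebuild the single missing packet (the None entry) from the parity packet."""
    missing = received.index(None)
    present = [p for p in received if p is not None] + [parity]
    rebuilt = xor_parity(present)
    return received[:missing] + [rebuilt] + received[missing + 1:]

group = [b"pkt0datax", b"pkt1datay", b"pkt2dataz"]   # toy equal-length payloads
parity = xor_parity(group)

lossy = [group[0], None, group[2]]                   # packet 1 lost in transit
assert recover(lossy, parity)[1] == group[1]
print("recovered:", recover(lossy, parity)[1])
```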
37:20
But it isn't actually necessary on the campsite, because we had DVB-T here, but at the Congress it might be interesting. And I think the NOC would like to see some multicast traffic too, wouldn't you? Yeah, multicast. So yeah, we announced a YOLO stream, so everyone on the campsite that has anything that makes any kind of noise,
37:43
or sound, or music, or the like, would be able to share that with the internet, but we didn't really get to implementing it, but we will try to do that on the Congress, so be prepared. If you have anything that makes any kind of sound or music,
38:01
or want to share anything, be prepared to stream something to an icecast gateway on the Congress, we will have a gateway there for you, so everyone on the internet and on the Congress can listen to what you're producing. Yeah, and that's all from the,
38:21
oh, there's the screen from the Teletext actually. So that's all we have to say, and say thank you to the cats, and thank you to all the people. And before you're leaving: we have our local mirror with all recordings of all talks here, and it's connected via 10GE to the Datenklo network,
38:43
so you can start your rsync now and take all the recordings with you, and yeah, see you soon.
40:21
Yeah, so I should talk to you about the power here at the camp. It was quite a tricky task to do it here. We started way before the setup here, back in March, and talked to the big grid company here, E.DIS, about whether it is possible to place some transformers outside,
40:42
but the capacity of the lines here doesn't work for us; you can run the museum with it, but you cannot get any more power. So we started here on the site on the 31st to deliver our material and build the backstage,
41:02
and yeah, we've installed a total of like 30 kilometers of cables and 224 power distribution boxes, and that's only the CEE connector ones, so we have about 500 normal connector boxes out across the whole campsite to deliver all the power you needed.
41:22
We planned with much more power than was really used. That's the plan of the whole campsite, it was also in the public wiki, and the company that does the sanitary installation here told us they alone would use about 400 kilowatts of power all the time.
41:42
I don't see that, but then I think all the showers would be cold; I don't know. In total we have seven generators here and five connection boxes, where, instead of a museum building, our connection box was hooked up,
42:01
and yeah, I think it works fine, the power network; we didn't have many problems. We had one generator failure this morning at the shower spot and the disco, and we had one, yeah, like burnt-out box,
42:21
there was a burnt RCD in it, that was on, I think, day minus one, and we had a defective power line on day three. I will come to the statistics later, because this computer doesn't have any internet. We have some nice graphics, we'll see them later from the POC,
42:44
so thanks to the POC for doing that. And we had a whole bunch of angels that helped us here, going around, checking that the cables are not too hot, and clicking the RCDs back in if they had tripped, and we had a lot of rain-proof installation,
43:01
I don't have to say too much about that. Since the thunderstorm, since we took out all the generators and plugged them back in, yeah, mainly all things are working; some light installations don't work, but yeah, that's normal. The whole rest was working fine, so we could,
43:41
so, do we get an image? So that's the portal the POC has built for us. You see the live view of all the power that's drawn from the generators, so I think this one is out right now, but the rest is live data, and you also have, yep, these live statistics as well,
44:16
so yep, so we have one big generator,
44:33
standing at the NOCDC, that was able to switch on another generator if we have a power failure,
44:40
but I think we didn't have any failures at the NOC DC, only the big one during the thunderstorm, and also for the tents I didn't hear about any failures, so in total, everything worked fine. And if this would just work,
45:12
yeah, I would like to show you the whole power consumption of the network, but because of the big power failure today,
45:21
and the generator doesn't work, I could not prepare it, so I don't think I will find it; you can talk to the POC, they should have it. And yeah, that was it from my side, so I hope it has worked for you and you didn't have too many power failures, but yeah, if you have any questions, you can ask them.
46:00
Hi. Good evening. Good evening, ladies and gentlemen. My name is Petanje, and the question is about the cat. Why is the cat standing there? Oh, okay, yeah. That's an interesting question.
46:21
Some time ago, the VOC started, when the VOC started to record lectures, we had all the things set up, all cameras running, all streams running, everything looked fine, and the hall was empty. So we said, okay, everything's fine, we can go and relax. But then the hall started to fill, and people were standing on the stages,
46:40
and someone asked, well, why is the stage still empty on the stream? And the thing is that the system in between has failed in a way that it repeated the same frame over and over again, and we didn't notice until the talk started. So we decided we would have something or somebody on the stage
47:01
who's moving constantly, but we wouldn't get an angel to dance all day on the stage. So we decided to get some cats that are moving all the time, and this is our test of whether our stream and our setup are working, because if they don't move, either the battery is empty or the stream is dead.
47:27
Actually, we're really liking them, so they're traveling with us to every event, and we really are caring for them. We also have a big one at the cube in the office, the mother of the small ones in the tents, maybe.
47:45
Hello, I am interested in the reasoning about medium-voltage transformers versus generators; was it not an option to install transformers and connect to the public grid?
48:01
Well, first this: yes, it would be possible. Like the NOC said, there was a big transformer station near Zehdenick, but it would have cost us more money than we had, and it would only be profitable if we did the whole camp for about one or one and a half months; then it would be okay.
48:30
And just another question: why were the showers electrically heated instead of burning fuel? Yeah, I would really like to say why, but I don't know.
48:43
I got a piece of paper, and there was 401.1 kW for the whole shower installation, and I said okay.
49:01
Do you know how much fuel was consumed by the generators, how many liters of diesel the camp used? I think, we tanked up the last bit today, and it is about 30,000. Okay, thanks. What percentage of the installed capacity was actually utilized by the camp?
49:25
Could you repeat it? Well, you said you have lots of generators and you had too much power, but we didn't use enough power, so how little did we use? So here at the camp we calculated with 200 watts per person.
49:41
With 4,500 visitors that would be about 0.9 megawatts, and we had the 400 kW for the installation of the sanitaries, and we had the lights and other things, so we planned with about 1.8 to 2.5 megawatts, and right now, I think, at the highest peak it was about 500 kW.
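Just to make the arithmetic in that answer explicit:

```python
visitors = 4500
per_person_w = 200                        # planning assumption per visitor
sanitary_kw = 400                         # the electrically heated sanitary installation

visitors_kw = visitors * per_person_w / 1000
planned_low_kw, planned_high_kw = 1800, 2500
peak_kw = 500

print(f"visitors alone: {visitors_kw:.0f} kW")
print(f"plus sanitary:  {visitors_kw + sanitary_kw:.0f} kW before lights and the rest")
print(f"planned {planned_low_kw}-{planned_high_kw} kW, observed peak ~{peak_kw} kW "
      f"({peak_kw / planned_high_kw:.0%} to {peak_kw / planned_low_kw:.0%} of plan)")
```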
50:09
Why is the grid frequency only 44 Hz instead of 50? That's the whole camp, so it is possible that some generators are a little bit lower and some are a little bit higher, so we would have to check all the generators,
50:24
and it's calculated; I don't know where the bug is. Is it the broken one? So here we have 49.98, and... Isn't that the broken one? Is it the broken one or not? Ah, it's possible that the broken one is not connected right now.
50:42
Thank you. I was wondering, I don't see anyone from the WOC, the Water Operations Center; can any of you give us any stats on water consumption or supplies or whatever? Maybe they're still drinking.
51:01
Right, they're drinking the black water, right? I think that yesterday the water was empty and we had to get a new one, or it was two days ago, so we've used really quite a lot of water. First of all, thanks, everything worked perfectly as intended.
51:24
Big up, really cool. I would be interested in the costs, actually. I'm not sure if you are allowed to talk about it, but I don't want to see numbers, I just want to know: is the POC like one third of the whole budget, the VOC, you know, just the technical costs?
51:43
But maybe it's a secret. I can talk about this from the NOC point of view. As we already mentioned, actually, we get a lot of stuff for free from people,
52:00
or there's not really a market to buy the things we need. So actually most of our expenditure is really on ancillary stuff, you know, how many cable ties and this kind of stuff we need, and all of our work is done by volunteers, just like in other teams. So it turns out that it's not that expensive from a network point of view.
52:26
Well, like, you saw the minions, the small boxes we have to encode the video, and we have those new since the last Congress, but they are not only for the camp here. So we will use them at a lot of the small conferences of the CCC,
52:44
and also assorted conferences and meet-ups all the way down. So even if they count, maybe, I don't really know which budget they have been calculated from, but even if they count towards here, they are not gone at the end of the camp,
53:02
so we will use them for the next years to do what we're doing, without any charge to small conferences. From my point of view, I cannot say; I don't know the whole budget here, but I think the electrical installation is really a big part of the whole budget,
53:21
because it's so much, the diesel and the hired equipment that we are using here, the generators, but you need it, because you don't know how many people you will get. Some villages said they need 63 amps just for them, and they don't use it right now. So I think it's quite a big part, but yeah.
53:45
Are there any more questions? Well then, give a round of applause to all the people working here, all the angels helping out.
54:15
So a really, really big thank you for all the figures, all the interesting stuff and interesting information.
54:23
This was the infrastructure review, so please also give a really big round of applause for Will, Ian, MaZderMind and Fengel.