Cloudbasierte Geodateninfrastruktur für den Glasfaserrollout in der Deutschen Telekom AG

Video in TIB AV-Portal: Cloudbasierte Geodateninfrastruktur für den Glasfaserrollout in der Deutschen Telekom AG

Formal Metadata

Cloudbasierte Geodateninfrastruktur für den Glasfaserrollout in der Deutschen Telekom AG
Title of Series
CC Attribution 4.0 International:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.
Release Date

Content Metadata

Subject Area
Overview of Deutsche Telekom AG's planning process for the fiber rollout, focusing on the structure and processes of a cloud-based geodata infrastructure.
Yes, we are going to talk about the cloud-based geodata infrastructure for the fiber rollout at Deutsche Telekom, and hopefully about many successful things. Good morning everyone. My name is Tobias, I work in Bonn at Deutsche Telekom, and I am glad that I can present our cloud-based geodata infrastructure today.
The project we belong to is known internally at Telekom as the FTTH Factory program, and the geodata infrastructure plays an important, central role in it for planning the fiber rollout.
When you tell people that you are going to FOSSGIS to talk about our geodata infrastructure, the first question is of course: open source software at Telekom, in the GIS area — are you actually allowed to do that? Yes, we are allowed to, and it works very well. You will also see in this talk that we not only use open source software, but also process open geodata.
As for the content: first I will explain what this FTTH Factory actually is. Then I would like to show what the geodata infrastructure contributes to the FTTH Factory. After that I will show how the GDI is built — or is being built — within the cloud, and in connection with that, how we deploy our applications into these system environments using CI/CD. At the end I will briefly show who works together with us and makes the FTTH Factory possible.
First we need to clarify what FTTH actually stands for: fiber to the home. In contrast to the other technologies, DSL and VDSL, the fiber optic cable runs all the way into the house — and in an apartment building, right into the apartment — so the end customer has a fiber connection directly at his modem. With DSL, the copper cable still runs continuously from the distributor into the house. Then there is VDSL: there the fiber optic cable runs to the distribution cabinet, but from there it is still copper into the house. With the vectoring procedure there is of course the possibility of squeezing more speed out of that, but the top speed we naturally get with fiber — with FTTH, right into the apartment. The problem is the following.
The current fiber-optic planning process involves a lot of manual activities. There is very little potential to automate it, scaling is really only possible through more manpower, and it is therefore not suitable for high output volumes. That is where we have to start in order to meet the company's goals and roll out more fiber. For that reason the FTTH Factory project was launched. It is about automating the entire planning process and making it scalable, and doing so with consistent capacity utilization. To be clear about capacity utilization: it is not that we build automation so that jobs can be cut afterwards. It is about enabling the existing planners to simply plan more areas, because the software automates certain things for them that previously had to be done manually. The idea is that a planner in Munich can, for example, also plan the network in Hamburg. Now to the contribution of the geodata infrastructure to the FTTH Factory project.
First of all, quite a classic task of a geodata infrastructure: the procurement and provision of geodata. Here you can see our two main data sources, which we provide to other components outside the geodata infrastructure for the FTTH Factory: OpenStreetMap data and ALKIS data. The OSM data we import into the PostGIS database; for the ALKIS data we use FME with an ALKIS model to load it into the PostGIS database as well. Besides these, there are of course other data sources that we connect, part of which is also open data that you can simply tap on the internet. As the framework we use geOrchestra — GeoServer and GeoNetwork are working in there, and there is also a security proxy which shields the whole thing. Ultimately, it is about providing this geodata company-wide as standardized services. And we provide metadata, so you can look up what we have in the geodata infrastructure — actually, there should be no geodata without metadata.
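Those standardized services are typically the OGC interfaces (WMS/WFS) that GeoServer exposes. As a minimal sketch of how a client would request features from such a service — the endpoint URL and layer name below are invented placeholders, not Telekom's real services — a WFS GetFeature request can be built like this:

```python
from urllib.parse import urlencode

def wfs_getfeature_url(base_url, type_name, bbox, srs="EPSG:25832"):
    """Build an OGC WFS 2.0 GetFeature request URL.

    base_url and type_name are hypothetical examples; a real GDI would
    expose e.g. an ALKIS parcel layer through GeoServer behind the
    security proxy.
    """
    params = {
        "service": "WFS",
        "version": "2.0.0",
        "request": "GetFeature",
        "typeNames": type_name,
        "srsName": srs,
        # bbox in the order minx,miny,maxx,maxy plus the CRS
        "bbox": ",".join(str(c) for c in bbox) + "," + srs,
        "outputFormat": "application/json",
    }
    return base_url + "?" + urlencode(params)

url = wfs_getfeature_url("https://gdi.example.org/geoserver/wfs",
                         "alkis:flurstuecke",
                         (361000, 5613000, 362000, 5614000))
print(url)
```

Fetching that URL from a WFS server would return the parcels in the bounding box as GeoJSON.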
The next contribution we provide within the FTTH Factory is the extraction of surface data and the calculation of potential routes. That sounds very complicated at first, so let's best start with the potential routes. What is a potential route? If I want to lay my fiber optic cable along a street, I actually have two possibilities for burying it (that really is the official expression in Telekom technology). Either I use the existing route: maybe a copper cable already lies in the street, so I could pull it out and place the fiber there, or possibly use an empty conduit that is already lying there. Or I look at the surface of the street — the sidewalks, maybe a green strip — and decide whether there is a cheaper alternative there. Maybe I can dodge into a green area, where laying the cable is of course much easier, or the road is well suited based on its surface, because it is asphalt and not paving stone, so I can work with the milling procedure, where I simply mill a shallow slot into the street and the cable goes right in. So I have these two variants:
the existing route, and the new potential routes, which may just be the cheapest alternative in a given case. So the question is how we get at these potential routes. Nobody goes out into every street and says "here you could dig" — that would be a very manual process. Instead they are calculated from surface data, and that happens in two ways. First, we have mobile laser scanning: a car — think of a Google Street View car — drives through the streets, laser-scans the area and takes photos. All this data then goes into an artificial neural network — this component comes from Fraunhofer — and with it the laser point clouds are classified, which gives us the classification of the surface. The other variant covers exactly the places where the car could not, or was not allowed to, drive: there we take orthophotos, that is, aerial photographs, and send them through a module that likewise performs a surface classification. In the end, after this first process, we have the complete development area classified. In the next step we link the classes with cost factors, carry out a routing, and then obtain for each street the potential paths — favorable trenching paths along which we could roll out the fiber. This data is ultimately stored in a PostGIS database, and then another component inside the FTTH Factory — one that also has access to the existing routes and the empty conduits — decides between our potential paths and the existing routes and simply selects the cheapest variant. That is then eventually the route along which the development area will be built out. Another important contribution we provide in the FTTH Factory is site and right-of-way assurance. I cannot just place my distribution boxes wherever I want, and I cannot just dig wherever I want — that always has to be agreed with the municipality. Here too we support the planner, for example by already giving hints in site assurance about what is public land and what is private land; on public land it is simply much easier to build a distribution box in agreement with the municipality. In addition, the planner can already produce the documents that he then submits to the municipality. All of this takes place in the so-called GDI Viewer, a JavaScript application that likewise builds on open source components. So these are, so to speak, the contributions we provide from the GDI to the FTTH Factory.
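The core idea of that routing step — linking surface classes to cost factors and searching for the cheapest path over the street network — can be sketched with a plain Dijkstra search. The cost factors, class names and the toy graph below are invented for illustration; they are not the project's real values or routing software:

```python
import heapq

# Hypothetical cost factors per surface class (illustrative values only):
# a green strip is cheap to dig up, paving stone is expensive.
COST_FACTORS = {"green_strip": 1.0, "unpaved": 1.5, "asphalt": 2.5, "paving_stone": 4.0}

def cheapest_route(graph, start, goal):
    """Dijkstra over street segments weighted by length * surface cost factor."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, length, surface in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(
                    queue,
                    (cost + length * COST_FACTORS[surface], nxt, path + [nxt]),
                )
    return float("inf"), []

# Toy street graph: node -> [(neighbour, length_m, surface_class), ...]
graph = {
    "A": [("B", 100, "asphalt"), ("C", 120, "green_strip")],
    "B": [("D", 100, "asphalt")],
    "C": [("D", 110, "green_strip")],
}
cost, path = cheapest_route(graph, "A", "D")
print(path, cost)  # the longer green-strip detour beats the shorter asphalt route
```

Here the detour over the green strip (230) wins against the shorter asphalt path (500) — exactly the kind of trade-off the talk describes.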
How is the geodata infrastructure built up? I already said — and it is in the title — that it is a cloud-based geodata infrastructure. That means:
Our main component is the Open Telekom Cloud, which offers OpenStack. OpenStack is a standardized API through which I can order VMs in the cloud. In the Open Telekom Cloud there are so-called tenants, which are essentially workspaces; we have two of them, one for the non-productive area and one for the productive area. The other important component I just mentioned is infrastructure as code. Of course there is a code repository: we have a tool called Telekom Workbench, offered internally, which includes among other things a GitLab where we store our entire code. Infrastructure as code means that I can build the whole system environment at any time from nothing — everything is in code, that is, scripts and configurations — so I can rebuild the entire infrastructure at the press of a button. We use different components for this. In the first step the VMs are created in the Open Telekom Cloud, and for the provisioning we use the tool Terraform. The Terraform configuration or script files describe how many VMs we need, how much RAM, how much storage and how many CPUs they have — all of that is in there. These scripts are executed in GitLab CI against the cloud via the OpenStack API, and so in the first step the VMs are created.
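A Terraform definition of the kind described — declaring a VM with its flavor (CPUs/RAM) against the OpenStack API — might look roughly like this. Resource names, image and flavor are placeholders, not the project's real configuration:

```hcl
# Hypothetical sketch: one VM in an OpenStack cloud such as the
# Open Telekom Cloud. All names below are invented placeholders.
resource "openstack_compute_instance_v2" "gdi_app" {
  name        = "gdi-app-01"
  image_name  = "Standard_Ubuntu_20.04"  # base Linux image
  flavor_name = "s3.large.2"             # flavor determines vCPUs and RAM
  key_pair    = "gdi-deploy-key"

  network {
    name = "gdi-internal"
  }
}
```

`terraform plan` would show this instance as to-be-created; `terraform apply` then creates it via the OpenStack API.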
You can see the whole thing here as a pipeline on the left side. First comes terraform plan: Terraform first looks at what the scripts would do — for example, the VMs are not there yet and it should create them. In the next step comes terraform apply, where the scripts are actually executed. And at the end there is a test again, a convergence check: it verifies whether the script really created the VMs the way it should have.
So in the first step we have all the VMs, but on each of them only a bare Linux image is running. Of course I now have to configure them, and for that we use Puppet in the second step. That means we install software on these VMs, configure the applications and deploy, for example, SSH keys — all of that is automated, nothing is done manually. As examples: a proxy is configured, because to get from our infrastructure out to the internet, external services of course need a proxy; then we have an IdM for identity management, an FTP server, and also the databases — an application database and a geodata database, which are likewise created and configured with Puppet. And a very, very important part, on the left side: all the applications we have run Docker-based, and with that many applications you need an orchestration of the whole thing. For that we use OpenShift, which is based on Kubernetes. For OpenShift too, in the first step the VMs are created and then configured so that OpenShift can be installed on them, and the installation of OpenShift itself is also fully scripted — nobody has to execute anything manually on the command line. As I said, the applications all run with Docker, and everything runs in OpenShift: this orange box here is a project within OpenShift, and the deployments are done with the tool Helm. With Helm, all these applications are deployed into this project in OpenShift, and that is then, so to speak, one system environment: the combination of applications running as Docker containers in an OpenShift project, plus an application database and a geodata database, both of course PostgreSQL-based — for the geodata database, PostGIS is used. Of these system environments we have four: dev, test, staging and prod.
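The Puppet step — installing and configuring software such as the PostgreSQL/PostGIS geodata database on a freshly provisioned VM — could be sketched like this; class, package and version names are illustrative assumptions, not the real manifests:

```puppet
# Hypothetical manifest: turn a bare Linux VM into a geodata database
# host. Package names assume a Debian/Ubuntu-style system.
class gdi::geodatabase {
  package { ['postgresql-12', 'postgresql-12-postgis-3']:
    ensure => installed,
  }

  service { 'postgresql':
    ensure  => running,
    enable  => true,
    require => Package['postgresql-12'],
  }
}
```

The point of this declarative style is exactly what the talk describes: the same manifest run on any fresh VM converges it to the same configured state, with no manual steps.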
So the question arises: how can we now deploy all the applications into these environments? For that we use
continuous integration and continuous deployment. We distinguish here between repositories that contain the application code and release repositories. On the left side, the developers check in their code, and every change eventually leads to an image being built and stored in a Docker registry — that is an artifact, and nothing about it is changed afterwards; every change to the code leads to a new image being stored there. On the right side we then have the release repositories: for each system environment they state exactly which version — which image — of which application should be used on that particular environment. In dev, for example, "gdi-viewer: latest" means I always take the newest image of the GDI Viewer; through this deployment configuration the registry is polled again and again to see whether there is something new, and new images are then fetched automatically. And if, for example, in staging version 1.0 is set and I change it from 1.0 to 1.1, a workflow is triggered automatically, the new version is delivered and deployed, and after that I can then test this new version 1.1 in staging.
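A release repository of the kind described could be as simple as one file pinning image versions per environment; the structure, names and versions here are invented for illustration:

```yaml
# Hypothetical release configuration: which image version of which
# application runs on which environment.
dev:
  gdi-viewer: latest   # always deploy the newest image automatically
staging:
  gdi-viewer: "1.1"    # bumping 1.0 -> 1.1 triggers the deployment workflow
prod:
  gdi-viewer: "1.0"
```

Changing a version in this file — rather than running a deployment by hand — is what triggers the automated rollout described above.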
In a nutshell, who we work with on this: I am of course not doing this alone for the company. You can see that there are very many companies involved, among them many open source companies that have also given talks here and are represented with booths. With that, I thank you for your attention and look forward to your questions.
Many thanks. In the meantime there is some time for questions directly. [Applause] — Thank you for the talk. On one slide you mentioned the Open Telekom Cloud. Is that something external customers can also use as a service? — Yes, the Open Telekom Cloud is a normal cloud offering; Telekom also operates as a cloud provider. Anyone can use it, and the special advantage is that it runs entirely in Germany, which in part makes for full compliance compared to other existing cloud solutions. So yes, you can also use it as a normal customer. — A small addition: the Telekom cloud is certified by the BSI, the Federal Office for Information Security — though I cannot say that for certain. Thank you again. — Just on the motivation for this automation: is it more about the number of qualified people running out, or about the processes simply taking too long? — Classically put, it is all three. Relative to the output, the planning is simply too expensive if you do not automate; you would need more staff if you kept doing it manually; and the current process also runs through so-called design offices, which means it is currently done differently across the country, whereas you want a standard. With that standard you can then, as the next step, much better predict what something will cost, and of course it is also about optimizing the price. — Okay. Hello, next question please.
A question about persistent data: you produce planning data, for example, which is then also saved in the PostGIS database. When a new version comes — the databases, as shown on the right side, run as VMs and not in the OpenShift cluster — how do you handle a version change? — Right, the databases do not run as Docker containers and cannot simply be swapped quickly for a new version. That is still something you have to plan well; of course you can automate it too and wrap it in scripts, but a database version upgrade is not something that happens every day — otherwise you have to migrate the data again — so at that moment there is probably always some manual preparation necessary. — Are the routes that are found then also passed on digitally, rather than the usual PDF? — Yes, the route is published from the planning area, for example as DXF. — Good question about open data: would you, for example, publish the laser data the car has collected as open data? — Within our project we would probably like to do that, but unfortunately it is not in our hands; we are already trying to give impulses, and of course it would be nice if the company decided to do it. But the routes in particular are often a problem, because they can be security-relevant: for the Federal Chancellery, say, I cannot let everyone know along which path it is connected and where the distribution box is. There is a security aspect, which means we cannot simply give everything to the public. — One more question would still interest us: you said the databases do not run in Docker — do you build them classically on the VMs? In the cloud there are also the typical DB services, as with Amazon and Azure — which variant do you use? — I would not quite call it classic, because we use Puppet, and Puppet has a meta-level in between: I write Puppet code saying I want a database installed, with these schemas and these users — I do not write classic SQL by hand. But in the sense you mean, yes: it is installed on a VM, not used as a managed service. We do not use the database services of the cloud, and that also has security reasons — we first have to be allowed to use such services — and there is of course the problem that, for example, PostGIS might not be available there. — That is a similar problem we have. You said you actually use nothing but VMs and do infrastructure as code classically; the next step would be to use cloud services such as load balancing — what do you do there? — Yes, of course we have load balancers, and we have the services behind them; we already use what comes out of the cloud as far as possible, because then you have less work. — Alright, I have to stop at this point; other questions can certainly be clarified afterwards. Yes, please — I thank you again.
everybody like that to give to the public that will do not work we would still make a question still interested to have them now said cross-country skiing in the databases not on joker but now the question build them physically so classic on the wms again because there is yes in the cloud also typical db services as well as amazon and escher and soja which variants do you have there? So now that's really what I want It does not say classic because we puppet take and puppet has so again a meta-level in between that means I say I would like to have one install database and also the and the schemes have there with which the users write the puppet code so to speak that means I do not write any That's why I do not want to say sql code classic but of course I class in this sense when you say it works a vm installed as a separate service We do not use the services of the cloud that too has just found a reason to security reasons It means we have also allowed halt We must be allowed these services Then use it and there are of course a problem that there There may not be a post available yes that's a similar problem we to have I have not seen her yet said loud but actually use they do nothing but ms and do infrastructures co we do too classic on physical sheet metal next step would be with the whole use cloud service like load balancing so what are you doing there yes yes yes, yes , that I only have it out complete old tradition too complex of course we have loadbalancer behind it we have the services behind it we already use that as far as that possible we take everything we use what comes out of the cloud because then you work less but alright I have to do that now the moment Other questions could certainly be asked nachgang still clarified yes please I thank you again