
Validating services and data in an SDI

Video in TIB AV-Portal: Validating services and data in an SDI

Formal Metadata

CC Attribution 3.0 Germany:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Subject Area
To achieve interoperability in a spatial data infrastructure (SDI), conformance to specifications is essential for services and data. Service and data providers need a capability to validate their components. For several OGC standards, the OGC CITE tests provide such a capability. This covers base standards, but in SDIs typically additional specifications are added, for example, service profiles or data specifications. In the European Location Framework (ELF) the test framework ETF is used to validate INSPIRE services and data provided by National Mapping Authorities against the INSPIRE Technical Guidelines as well as against ELF-specific requirements. ETF is a test framework for spatial data infrastructure components. It supports SoapUI (for testing web services) and BaseX (for testing XML documents, including very large ones) as test engines to develop and execute test suites. ETF has been implemented in several iterations over recent years as existing open source test environments could not be configured to provide uniform test reports that were readable by and useful for non-developers. Outside of the ELF project, ETF is currently mainly used in Germany and the Netherlands, partly extending the INSPIRE-specific tests based on national profiles. We present the approach for developing user-friendly test suites and discuss typical issues that have been encountered in the ELF testing.
So we continue with the next speaker, Clemens. When I saw this session on the conference agenda I was happy to take it, and one of the reasons is that it is a really good group of talks. I am myself a frequent user of the test framework that Clemens is going to present, so I am really looking forward to his presentation. He just told me that this is his first FOSS4G; although he has attended hundreds of OGC meetings, this is his first one here, so let's give him a warm welcome. He has created a couple of very nice open-source projects, like ShapeChange and the ETF framework, which are really useful tools in SDI development.

I pressed the wrong button, so you don't see any slides right now, but hopefully we will get that fixed; I'll start anyway. This is a presentation that basically three organizations have been working on. It is about validating services and data in an SDI. The first part looks at a large European project, ELF, the European Location Framework, which provides the background for what I will be talking about and the experiences that we have had. The second part will be about open-source software that has mostly been developed by interactive instruments, our company, so I will also present a little bit about that. The three organizations that contributed to this topic are the Norwegian Mapping Authority, where my colleague is the work package leader in the European Location Framework project that tries to build up all these services providing the data, and also Thijs Brentjens from Geonovum in the Netherlands, who also
contributed to the testing. Good, the slides are back, so we can continue. A little bit of background about the ELF project: as I mentioned, it is a large European project. It runs for 44 months and will finish in October. It has more than 40 partners, 23 of which are national mapping and cadastral agencies, so they provide the data. It is their response to requirements on the European level, from INSPIRE and others, to provide their data as consistent reference data for all of Europe. What that means is that right now we already have more than 100 services that provide national data sets, basically to INSPIRE and also to the infrastructure that the European Location Framework project has built. They currently come from 13 service providers; those that joined earlier this year are still working on their services. All these services need to be validated and tested; if you were here in the previous session you heard about that from the OGC perspective. We need to test and validate that all the services meet the INSPIRE requirements and the other requirements that we are looking at. In addition, because the idea is that you should not have to connect to the 100 service endpoints, eventually maybe 200 or more, the project also provides central access points that cascade the responses from the national services. So there is a central service infrastructure that provides cascading access to the national services, and that also needs to be tested, of course. It is quite a challenging SDI that has been built as part of the INSPIRE developments, and in the end conformance and validation matter very much. Performance is an important aspect, because if you have a central node that cascades to web map services or Web Feature Services, which is what happens here, then problems in the services or in the data become problems in the cascading integration server as well. So that is an important aspect.
We also looked at how we test and how we validate, and that covers several levels. We start at the bottom: the national data providers are responsible for testing their data sets. Then the service providers, which are often the same national mapping agencies but can also be other organizations, have to test their services. Then, moving up to the central tier, there are the cascading services, and also security, access control, licensing, and so on; there it is the core team that manages the central infrastructure. So it is quite a stack of things that need to be done when you do testing. For OGC conformance testing we have the OGC compliance tests that were presented in the previous talk, and they are used for that part. For service metadata, which is basically the capabilities of WMS and WFS, there are additional tests. Because we have INSPIRE requirements, there is also the metadata validator of the INSPIRE geoportal from the JRC for the metadata about the data, and that is used for validating the service metadata of the services. And there are additional tests for INSPIRE download services and view services and the GML encoding.
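As a toy illustration of the service-metadata level of this stack, the sketch below checks a WMS capabilities document for a few metadata elements that an INSPIRE-style profile typically requires. This is a minimal sketch, not ETF code; the capabilities fragment and the list of required elements are assumptions for the example.

```python
import xml.etree.ElementTree as ET

# A trimmed WMS 1.3.0 capabilities fragment from a hypothetical service;
# only the parts that the check below inspects are included.
CAPS = """<WMS_Capabilities xmlns="http://www.opengis.net/wms" version="1.3.0">
  <Service>
    <Name>WMS</Name>
    <Title>Hypothetical ELF View Service</Title>
    <AccessConstraints>none</AccessConstraints>
  </Service>
</WMS_Capabilities>"""

WMS_NS = "{http://www.opengis.net/wms}"

def check_service_metadata(caps_xml):
    """Return human-readable findings for missing service metadata elements."""
    root = ET.fromstring(caps_xml)
    required = ("Title", "Abstract", "Fees", "AccessConstraints")
    return [f"Missing <{name}> in service metadata"
            for name in required
            if root.find(f"{WMS_NS}Service/{WMS_NS}{name}") is None]

for finding in check_service_metadata(CAPS):
    print(finding)
# Missing <Abstract> in service metadata
# Missing <Fees> in service metadata
```

The point is that the output names the missing element, rather than just reporting a boolean failure.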
That is where the software that I will talk about in a little more detail later, the ETF tool, comes in: it is where we have the INSPIRE-specific and ELF-specific tests, and where it is used and has been developed. Another aspect is monitoring the infrastructure and testing capacity and performance; for that, the Spatineo tools from Finland are used to constantly monitor the infrastructure. Finally, for the data tests the mapping agencies mostly use the GIS systems that they use anyway, so that can be ESRI, it can be 1Spatial; Snowflake Software has some support as well. But there are also some specific data quality tools, including open-source ones, that have been developed, and these are used by the NMCAs. So in order to get a working infrastructure we have to make sure that all of that
works. That is a little bit of the theory; if we look at it in practice, the assessment of the work package leader is basically that there is still work to do. If we look at what the tools are, there is still work to do from the validation point of view, both in the functionality that they support and in the error reports, and that is one of the big problems: when you see "failed", you need more information to understand what the failure is, so that you can actually fix it. That is one thing that we worked on in the ETF approach. Those of you who have used the OGC CITE tests as developers know that it is often hard, but doable, to understand what issue has been raised. But for users, say at a mapping agency, who just want to run the tests, getting a result without really understanding why the failure occurs is a problem. That makes the iteration cycle very hard and difficult, so it is something that needs to be worked on. It is easier for WMS, because that is a simpler service, and it is quite hard to get it right for Web Feature Services. So we are at a point where it is possible to actually connect the services, but it is really a starting point, and more work will be needed in the future to actually increase the conformance of the different services. And as we have heard before, also in Luis's presentation, integrating testing into the early development phases is essential: you have to test early on, because it is much easier to fix errors early than to fix them later in the process. So what I will be talking a lot about are the ETF activities. Maybe as a start: we call ETF a test framework because it does not run the tests
itself, but uses test engines: SoapUI for web service tests, and BaseX, an XML database, for validating large XML data sets or documents, up to several hundred gigabytes. That is why we use an XML database to do that.
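ETF delegates large-file validation to BaseX. As a language-neutral illustration of why a streaming or database-backed approach matters for multi-gigabyte documents, here is a minimal Python sketch (not ETF or BaseX code) that walks a feature collection with a constant-memory streaming parse instead of loading the whole tree:

```python
import io
import xml.etree.ElementTree as ET

# A small in-memory stand-in for a large GML feature collection; the same
# pattern applies unchanged when the input is a multi-gigabyte file on disk.
doc = io.BytesIO(
    b"<collection>"
    + b"".join(b"<member><id>%d</id></member>" % i for i in range(1000))
    + b"</collection>"
)

count = 0
for _event, elem in ET.iterparse(doc, events=("end",)):
    if elem.tag == "member":
        count += 1
        elem.clear()  # discard the parsed subtree so memory stays bounded

print(count)  # 1000
```

A dedicated XML database such as BaseX adds indexes and a query language on top of this kind of streaming access, which is what makes rule checks on very large documents practical.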
Some of the things that we did in the ELF project: we developed the tests for the INSPIRE view service, mainly WMS 1.3.0, and for the Atom-based INSPIRE download service. These tests do not cover basic OGC compliance, because that is what the OGC tests are for, but just the additional requirements that the INSPIRE technical guidance adds. I will have links to the resources later on the slides. What we
developed are also some additional tests that go beyond the INSPIRE requirements. For example, for links between features the INSPIRE guidance does not say anything, so we added additional rules to get a reliable structure across the different national data sets and services, so that you can follow links and have references between features in the data sets in a consistent way. We also worked a lot on improved test reports, although there is still work to be done, so that it is easier to understand issues. We created a web application, hosted by us for use within the project, and we added customization options for the report style sheets; the reports are in XML, also stored in an XML database, and are then rendered to HTML. We have also been doing some other activities in the ELF project. I already mentioned the support for large XML data sets using BaseX. We also have direct support for Schematron tests, and we have made some extensions to the XML database to provide it with spatial capabilities, because when we test GML data we also need support for geometric predicates: geometric validation, geometric predicates and spatial indexing are supported for large GML data sets. And we are working right now on a major update of the core software to a new major version; I have something about that on the final slide. That slide is a little bit wide, so part of it did not make it and the PDF export looks kind of screwed up, but never mind.
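The geometric validation mentioned above can start from simple structural rules on coordinates. As a toy illustration (not the actual ETF/BaseX extension), the sketch below parses a GML-style posList and checks the classic linear-ring rule that a polygon boundary must close:

```python
def parse_pos_list(pos_list):
    """Parse a GML-style posList of 2D coordinates into (x, y) tuples."""
    values = [float(v) for v in pos_list.split()]
    return list(zip(values[0::2], values[1::2]))

def ring_is_closed(coords):
    """A linear ring needs at least 4 positions and must end where it starts."""
    return len(coords) >= 4 and coords[0] == coords[-1]

closed_ring = parse_pos_list("0 0 10 0 10 10 0 10 0 0")
open_ring = parse_pos_list("0 0 10 0 10 10")

print(ring_is_closed(closed_ring))  # True
print(ring_is_closed(open_ring))    # False
```

Real geometric predicates (intersects, within, validity of curved geometries) are much more involved, which is why the talk describes extending the database rather than writing such checks ad hoc in each test.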
The user interacts with the web application, which is connected to the services and the data sets that we test; metadata also falls under the data sets in this picture, because it is basically just records that you are investigating. The web application passes this information to the driver, and the test projects are then executed in the test engines, currently SoapUI and BaseX. If we test additional data, potentially coverage data, then maybe we will need some additional
test engines as well. We use SoapUI, which has some advantages; maybe some of you know it. There is an open-source version and there is also a commercial version, and we use the open-source version. One of the big advantages is that you have a graphical user interface to actually develop your tests, which is much better than just writing them somehow in XML, and you have mechanisms to rapidly run the tests yourself while you work on them. And because SoapUI is the test engine, you do not have to use ETF to run the tests: we have a plug-in so that you can run all the ETF tests locally in your own environment. But we have also identified some limitations, and that is part of the reason why it is sometimes hard to provide really useful test reports. There is a limitation in the capabilities when we look at the workflow we need to go through when we test: calling the web services, getting data, parsing the results, and using that to create new test cases. That is sometimes hard to do, and basically the standard test output is not useful. So we have done a lot of work to annotate the test projects so that we can create more helpful reports. Here is a screenshot of the web application: you just provide the URL of the service under test and give the test run a name, very similar to the OGC user interface that we have seen in the previous presentations, and then you get the results. What we try to do is to come up with messages that not only provide an idea of what failed, but also of how to fix it. Here is a response for a WFS: INSPIRE requires that it implements a minimum temporal filter, and the test detects that this is missing and reports that information. That is what the results look like.
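A check in that spirit, sketched below, inspects a WFS Filter_Capabilities fragment for a temporal operator and fails with a message that hints at the fix. The fragment, the choice of the "During" operator, and the hint text are assumptions for the example, not the actual ETF test.

```python
import xml.etree.ElementTree as ET

FES = "{http://www.opengis.net/fes/2.0}"

# Filter capabilities fragment of a hypothetical WFS that advertises
# no temporal operators at all.
FILTER_CAPS = """<Filter_Capabilities xmlns="http://www.opengis.net/fes/2.0">
  <Temporal_Capabilities>
    <TemporalOperators/>
  </Temporal_Capabilities>
</Filter_Capabilities>"""

def check_temporal_filter(caps_xml):
    """Return 'PASSED', or a failure message with a hint for fixing it."""
    root = ET.fromstring(caps_xml)
    ops = {op.get("name") for op in root.iter(f"{FES}TemporalOperator")}
    if "During" in ops:
        return "PASSED"
    return ("FAILED: no 'During' temporal operator advertised in "
            "Filter_Capabilities. Hint: enable temporal filtering in the "
            "server configuration and re-run the test.")

print(check_temporal_filter(FILTER_CAPS))
```

The difference from a bare pass/fail flag is that the operator's name and a remediation hint are part of the report, which is what makes the iteration cycle workable for non-developers.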
You can find more information online: the source code is all on GitHub, and there is also an issue tracker that people use for reporting and identifying issues, and that is what we work with. There are also wiki pages, but we still need to put more work into the documentation. We also have a Docker image; most of the deployments use Docker, so that is an easy way to get started. We use badges in the GitHub repository, and as I mentioned we have the test projects, so once you have the link they are also on GitHub, also with issue trackers, which we use to clarify issues in the project. The tests are also used in the Netherlands, where they use ETF for the national SDI. So the current users of the framework, beyond the ELF project, are: ourselves, as we use it in our internal continuous-integration environment for some of our own products; Geonovum, as I mentioned; and the German mapping agencies of the Länder, which use it to validate, for example, CityGML building data in their production workflows. That is also where we have several hundred gigabytes of GML data that we
test. One thing that we are working on right now, and that was presented about a month ago at the INSPIRE conference in Barcelona, is using ETF in the INSPIRE test framework, in which INSPIRE data and services are validated. We are working on several extensions to what you have seen before. There is a new API that provides not only user-interface access but also a REST API with XML and JSON encodings; a much richer domain model with abstract test suites and so on, following the ISO and OGC test specifications; support for multilingual reports; and several other things. One of the things that we are working on, which will not be ready this month or the next but is on the plan, is to add the TEAM Engine that we heard about before as another test engine, so that we can also run the OGC CITE tests as part of the ETF execution.
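The new REST API with JSON encodings was only outlined in the talk. Purely as a hypothetical sketch, a client starting a test run might assemble a request body like the one below; every field name and value here is an assumption for illustration, not the actual ETF API schema.

```python
import json

# Hypothetical request body for starting a test run via a JSON API; the
# field names are illustrative only, not the actual ETF v2 schema.
run_request = {
    "label": "INSPIRE view service tests - example run",
    "executableTestSuiteIds": ["EID-hypothetical-view-service-suite"],
    "arguments": {
        "serviceEndpoint": "https://example.org/wms?request=GetCapabilities",
    },
}

body = json.dumps(run_request, indent=2)
print(json.loads(body)["label"])  # INSPIRE view service tests - example run
```

The design point is that everything reachable through the web interface is also reachable through a machine-readable API, so test runs can be scripted and integrated into other tools.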
And here is a mock-up; actually it is not really a mock-up, it uses the new styling and there are some advanced concepts already in there. It did not run in the web application, but it was an executable test: we took INSPIRE hydrography data and created a test report from it. So this is very similar to how it will look; you will be able to see it in a few months, and the idea is to make it publicly available as well. That's it,
thank you very much.

Question: Any questions from the room? Could you say something about how you test the services? Do you test each of them, given that each could behave a bit differently, or how do you handle that?

Answer: We do not copy the OGC tests, so we do not test the basic functionality; the INSPIRE view service test, for example, only tests the additional requirements that INSPIRE adds on top of the OGC specifications. And we do not require any reference data, like the current OGC tests do. That is, I think, one of the problems from an INSPIRE perspective with the current OGC WMS tests: you have to have reference data. That is not the case for the WFS 2.0 test, which can work with any WFS, but with WMS you need reference data and then it does an image comparison. So yes, there are limits to what you can do automatically if you do not have a reference data set, because then you cannot really verify everything. We basically do not test whether all the functions are exactly correct; we do certain checks, but the tests will never be so complete that you can be absolutely sure that every query is executed exactly right, because for that you would need a reference data set to compare against. Typically we do that in our unit tests locally, but the idea of these tests is that they work with every service endpoint, so there are limits to how deep you can go with them.

Question: Somebody else? I have a question: so you will be able to run CITE tests from ETF, maybe in the future. Is there also work in the other direction, so that OGC can benefit from the work that you are doing?

Answer: There is. One of the discussions that I had with Luis in March was that it would certainly be simpler if we harmonized, or
worked together on, how we express test results. One of the things when we do the TEAM Engine integration is that we can only work with the test reports that you can get from the TEAM Engine, and to be frank, those are not really good, so there will be limits to what we can include in our reports. I know that at OGC there is also work ongoing to improve this, because it is a known limitation. So the idea is there to work on new ways of encoding test reports. One candidate is EARL, a W3C draft specification, the Evaluation and Report Language, which is a representation of exactly these kinds of test results. One of the things that we also have in our design report for the INSPIRE test framework is that we could create EARL reports as well, so that we could use the same kind of encoding and other people could build on top of that, and you can access that also via the API. So there are certainly opportunities where it also works the other way around.

Moderator: This is not your last chance; Clemens has more presentations this week. Any ShapeChange questions? Maybe it is good to mention that recently Clemens and I worked on a project called ldproxy, which is also an open-source project and which enables you to put a proxy on top of a WFS and expose it to search engines and so on. I think tomorrow we have a presentation about it, and you are very welcome to join. OK, we will start again in 5 minutes.