
The template for a Semantic SensorThings API with the GloSIS use case


Formal Metadata

Title
The template for a Semantic SensorThings API with the GloSIS use case
Series Title
Number of Parts
156
Author
License
CC Attribution 3.0 Unported:
You may use, modify and reproduce, distribute and make publicly available the work or its content in unchanged or modified form for any legal purpose, provided that you credit the author/rights holder in the manner specified by them.
Identifiers
Publisher
Year of Publication
Language

Content Metadata

Subject Area
Genre
Abstract
Motivation
----------

Spatial Data Infrastructures (SDI) developed for the exchange of environmental data have heretofore been greatly shaped by the standards issued by the Open Geospatial Consortium (OGC). Based on the Simple Object Access Protocol (SOAP), services like WMS, WFS, WCS and CSW became digital staples for researchers and administrative bodies alike. In 2017 the Spatial Data on the Web Working Group (SDWWG) questioned the overall approach of the OGC, based on the ageing SOAP technology [@SDWWG2017]. The main issues identified by the SDWWG can be summarised as:

- Spatial resources are not identified with URIs.
- Modern API frameworks, e.g. OpenAPI, are not being used.
- Spatial data are still shared in silos, without links to other resources.
- Content indexing by search engines is not facilitated.
- Catalogue services only provide access to metadata, not the data.
- Data are difficult to understand by non-domain-experts.

To address these issues the SDWWG proposed a five-point strategy inspired by the Five Star Scheme [@BernersLee2006]:

- *Linkable*: use stable and discoverable global identifiers.
- *Parseable*: use standardised data meta-models such as CSV, XML, RDF, or JSON.
- *Understandable*: use well-known, well-documented vocabularies/schemas.
- *Linked*: link to other resources whenever possible.
- *Usable*: label data resources with a licence.

The work of the SDWWG triggered a transformational shift at the OGC towards specifications based on OpenAPI. But while convenience of use has been the focus, semantics has been largely unheeded, and a Linked Data agenda has not been pursued. However, OpenAPI opens the door to an informal coupling of OGC services with the Semantic Web, considering the possibility of adopting JSON-LD as the syntax of OGC API responses. The introduction of a semantic layer to digital environmental data shared through state-of-the-art OGC APIs is becoming a reality, with great benefits to researchers using or sharing data. This communication lays down a simple SDI set up to serve semantic environmental data through a SensorThings API created with the `grlc` software. A use case is presented with soil data services compliant with the GloSIS web ontology.

SensorThings API
----------------

SensorThings API is an OGC standard specifying a unified framework to interconnect Internet of Things resources over the Web [@liang2016ogc]. SensorThings API aims to address both semantic and syntactic interoperability. It follows ReST principles [@fielding2002principled] and promotes data encoding with JSON, the OASIS OData protocol [@chappell2011introducing] and URL conventions. The SensorThings API is underpinned by a domain model aligned with the ISO/OGC standard Observations & Measurements (O&M) [@Cox2011], targeted at the interchange of observation data of natural phenomena. O&M puts forth the concept of `Observation` as an action performed on a `Feature of Interest` with the goal of measuring a certain `Property` through a specific `Procedure`. SensorThings API mirrors these concepts with `Observation`, `Thing`, `ObservedProperty` and `Sensor`. This character makes SensorThings API a vehicle for the interoperability of heterogeneous sources of environmental data.

`grlc`
------

`grlc` (pronounced "garlic") is a lightweight server that translates SPARQL queries into Linked Data web APIs [@merono2016grlc] compliant with the OpenAPI specification. Its purpose is to enable universal access to Linked Data sources through modern web-based mechanisms, dispensing with the use of the SPARQL query language. While losing the flexibility and federative capabilities of SPARQL, web APIs present developers with an approachable interface that can be used for the automatic generation of source code. A `grlc` API is constructed from a SPARQL query to which a metadata section is prepended. This section is declared with a simplified YAML syntax within a SPARQL comment block, so the query remains valid SPARQL. The metadata provide basic information for the API set-up and, most importantly, the SPARQL endpoint on which to apply the query. The listing below shows an example.

```
#+ endpoint: http://dbpedia.org/sparql

PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

SELECT ?band_label
{
  ?band rdf:type dbo:Band ;
        dbo:genre dbr:Hard_Rock ;
        rdfs:label ?band_label .
}
ORDER BY ?band_label
```

A special SPARQL variable formulation is used to map variables into API parameters. By adding an underscore (`_`) between the question mark and the variable name, `grlc` is instructed to create a new API parameter. A type suffix, again separated with an underscore, informs `grlc` of the parameter type. The `?band_label` variable can be expanded to `?_band_label_iri` to create a new API parameter of type IRI.
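As an illustration of this convention, the listing above can be reworked so that the genre is supplied by the API caller instead of being fixed to `dbr:Hard_Rock`. This is only a sketch: the `?_genre_iri` parameter name does not appear in the original listing and is chosen here for the example.

```
#+ endpoint: http://dbpedia.org/sparql

PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

# ?_genre_iri becomes a mandatory API parameter of type IRI;
# a caller could pass, e.g., http://dbpedia.org/resource/Hard_Rock
SELECT ?band_label
{
  ?band rdf:type dbo:Band ;
        dbo:genre ?_genre_iri ;
        rdfs:label ?band_label .
}
ORDER BY ?band_label
```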
Transcript: English (automatically generated)
So my name is Luis, I work at this institution called ISRIC. One of the missions of ISRIC is providing soil data, accurate and good-quality soil data, to the world. About 8 years ago, the OGC performed a tectonic shift in the way it published its standards by adopting REST APIs. Since 2016 they have been publishing standards in this paradigm. A lot of things have changed, mostly for the better. One of the things that happened, for instance, is that the response formats have been opened up and now include things such as JSON.

And that changes everything. Data provision on the internet with OGC standards became something very, very different. For an institution like ISRIC this is very important, because it opens the door into the semantic web, into linked data. And for an institution like ISRIC it's really important that our data connects to other sources of soil data and other sources of data that are related to that context. So this set in motion a number of changes in the landscape, and one of those is applications like Prez.
And Prez, what it does: it's a piece of software that is being developed in Australia. I see some people smiling, I hope they are. I don't know if they are from Australia. Anyway, what this software does is serve linked data directly from a triple store through OGC-compliant services.

Now, what is important here in the context of this talk is the SensorThings API. This was the very first REST API specified by the OGC. It was meant for the Internet of Things, the context that we saw earlier with Daniele, with the monitoring of the algal blooms. However, this API is aligned with an earlier standard of the OGC called Observations and Measurements which, for those that don't know it, basically gives a conceptual structure to your data, directed at the general observation of natural phenomena. Again, this links to the nice presentation by Daniele. This was 2016, 2019. The W3C publishes an ontology called SOSA, which you can see here (do I have a pointer? yeah), and in this ontology you have concepts such as Observation, Procedure, Feature of Interest and Observable Property.
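As a rough illustration of how these SOSA concepts fit together, a single observation could be written into a triple store with SPARQL INSERT DATA; all identifiers and values below are invented for the example and do not come from the talk.

```
PREFIX sosa: <http://www.w3.org/ns/sosa/>
PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>

INSERT DATA {
  # a made-up soil pH reading expressed with the SOSA concepts named above
  <http://example.org/obs/1> a sosa:Observation ;
      sosa:hasFeatureOfInterest <http://example.org/profile/P42> ;           # Feature of Interest
      sosa:observedProperty     <http://example.org/property/soil-pH> ;      # Observable Property
      sosa:usedProcedure        <http://example.org/procedure/pH-in-water> ; # Procedure
      sosa:hasSimpleResult      "6.5"^^xsd:decimal ;
      sosa:resultTime           "2023-06-28T10:00:00Z"^^xsd:dateTime .
}
```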
For those of you, for instance, that work in the context of INSPIRE, none of this should be strange. What this means is that with this ontology, if I structure my data, if I create my knowledge graph complying with this ontology, I am immediately aligned with the SensorThings API. And then the next step is the software. And actually this is where things start to become a bit magical, because you have software like grlc (it's pronounced "garlic", by the way) that can automatically generate a REST API from a SPARQL query. This is a very simple example, and if you have never seen SPARQL before, don't get scared. Basically this is a query that returns all the observations in my knowledge graph and has a few annotations that allow the software to then produce a REST API from this query. And this happens without installing any software; it all happens in GitHub. It's as close as it gets to magic in this context.
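The query on the slide is not reproduced here; a minimal sketch of what such a grlc-decorated query could look like, with a placeholder endpoint, is:

```
#+ summary: List the observations in the knowledge graph
#+ endpoint: http://example.org/sparql

PREFIX sosa: <http://www.w3.org/ns/sosa/>

# grlc exposes this query as a GET operation of the generated REST API
SELECT ?observation ?property ?result
WHERE {
  ?observation a sosa:Observation ;
               sosa:observedProperty ?property ;
               sosa:hasSimpleResult  ?result .
}
```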
But there is more. Because actually, if you know SPARQL and you look closely into this query, you can see that most of it is universal. This query, with some small changes, would apply to any kind of knowledge graph that uses this SOSA ontology, when we see for instance things such as the SOSA Observation. This is universal for any kind of knowledge graph that complies with this ontology. And actually, within the Mater project, an EU-funded project, colleagues in the Poznan Computing Center are developing a tool that already pretty much can inspect a knowledge graph that complies with this ontology and automatically, without any other intervention, generate a REST API compliant with the SensorThings specification.
So this is my message with this paper and this talk: in the SOAP era, serving this kind of data was focused on the technology. We saw the last presentation was a bit like that: which is the most performant? How do I get my data in? How do I make my data accessible to that software? And this modern age of APIs is turning data provision into something completely different. Your work can be completely semantics-centred; it's all about the structure of your data and, in this case, of the semantic web: how do you create a knowledge graph that can be immediately consumed by the software? I hope this makes sense. Enjoy FOSS4G and see you later.