
In-situ observation renaissance with istSOS and IoT


Formal Metadata

Title
In-situ observation renaissance with istSOS and IoT
Series Title
Number of Parts
295
Author
Contributors
License
CC Attribution 3.0 Germany:
You may use, copy, distribute, transmit and make the work or its content publicly available, in unchanged or adapted form, for any legal purpose, as long as the work is attributed to the author/rights holder in the manner specified by them.
Identifiers
Publisher
Release Date
Language

Content Metadata

Subject Area
Genre
Abstract
Since the 1980s, just as the climate change issue started to raise interest, monitoring networks began to decline due to financial constraints and the advent of the satellite era [1]. Remote sensing, with its capability of global monitoring, pushed aside direct observation, which often requires high investments at the local level for installation and maintenance. Nevertheless, in-situ monitoring is essential for a large number of activities that require continuous, long-term, high-frequency, and accurate data, as well as for calibrating and validating remote sensing data. With the advent of IoT, in-situ monitoring is getting back the necessary attention, and more people, also in the field of FOSS4G, are starting to work in this area. The istSOS development team has been working since 2009 to bring in-situ monitoring back to its golden age by fostering interoperability and data management best practices. Several projects are presented here to demonstrate how istSOS, IoT, and openness can contribute to this goal through a number of applications in the fields of agriculture (ENORASIS), water management (hydromMetTI, FREEWAT, TRESA), risk mitigation (SITGAP, MIARIA), health (ALBIS), and development and cooperation (4ONSE).
Keywords
Transcript: English (auto-generated)
Thank you. This talk is about istSOS, which is a software that we started developing 10 years ago now, and which we use to collect data from sensors deployed in the field.
And the title is because I came across this paper in Science. It was very interesting because it was an analysis of how widely Earth observation data are used nowadays.
And since this kind of data is available, local monitoring systems based on in-situ sensors started to be put aside, because of course there are several benefits in using Earth observation systems.
And this is true: in fact, the satellite market is growing very fast. This chart is not really up to date, but it gives you an idea of the number of satellite missions launched since 1972, and of the data that are collected.
This type of data is very interesting because it is spatially distributed, so you can have data that cover a large area. It has high coverage because these satellites generally rotate around the globe and collect observations periodically.
And one important thing is that they are maintained by external institutions. It means that you can use the data, but you don't bear any cost for maintaining the system and all these kinds of things.
The drawback, let's say, is that the temporal and spatial resolution is generally low, because even though you have high-resolution satellites now, you still have a pixel of metres on the ground, and it is an indirect measure.
The same paper, at a certain point, was also questioning whether some observations from satellites are actually observations, or rather a result of models, because the processing of the data before arriving at the desired output is so complicated and so long,
and has so many assumptions and models inside, that it is questionable whether it is a direct observation or not. On the other side, in-situ observations have some good points, and here too there are drawbacks. The good points, from the same paper, are fidelity, resolution, and consistency.
We very often have historical records, even of more than 100 or 150 years, of the same observation in the same location. This gives us a lot of information to extract trends and behaviour and to understand the phenomena in the long term.
And they are generally point measurements, so you have a sensor at a location collecting information at high temporal resolution. This permits, for example, understanding locally highly variable phenomena and also local phenomena.
So instead of sampling at higher spatial resolution, if you are able to have a higher sampling frequency, you can detect with more precision the phenomena that you want to monitor. On the drawback side, there is low spatial coverage: the resolution is high because you are collecting at one point,
but you cannot deploy 10,000 sensors in the fields to cover a large area. And the cost of maintenance and management at the local scale is on your shoulders, so it's not an external cost.
These papers make a comparison of the costs, and they say it's very difficult to understand what the cost of maintaining a satellite system or a monitoring network is. But more or less, they come out with this: one third of the cost of a satellite mission is needed every year for maintaining a monitoring network.
Maintenance is a crucial task in a monitoring network, because you can deploy your sensor, but if you don't take care of it, after a while the data are basically trash. So, these days, there is a lot of push that can help in sustaining the development of monitoring stations.
And this is mainly thanks to the IoT market, which is growing quickly. There is a large number of projections of the potential market in the coming years.
And because of this great market, there is also a great push from the technological point of view to create new sensors and new devices at low cost, and also new communication technologies. In the end, this brings new potential for deploying new monitoring networks at a somewhat lower cost.
And this is particularly true for the second-level networks. I call these first-level and second-level; to have an idea, think of weather stations and hydrological monitoring.
We have a federal monitoring network which uses very expensive stations with very high precision. Their final goal is to address national issues like aviation, weather forecasting, and major river management. So they have certain goals. But then there are also the second-level monitoring networks, which are more problem-specific.
They require less accuracy, for example, but enough accuracy to solve practical issues. So we have, for example, in our region, networks that control resource-use activities: for example, to control the minimum flow in the river, so as to grant water concessions for drinking water abstraction and things like that.
This is how the monitoring network of hydro-meteorological stations evolved from 2000 to 2019 in the region where we are located, in the southern part of Switzerland.
And you see that there is an increasing number of stations. These in blue are from the national federal monitoring network, and these are cantonal networks, and these were not real-time stations. So the trend is to try to densify the networks, to automate them, and then to integrate and create partnerships.
Because any additional information, even if it is not of the same quality as the first-level networks, is still useful to densify and to create new information.
And this plot, which I found very nice, is from a study on the hydrometric network in Switzerland. You can see the trend that I was describing at the beginning: there was an increasing number of stations deployed up to around the 70s and 80s, when satellites started, and then it stabilized.
On the other side, from the 70s and 80s, there is a huge increase in the number of local stations from second-level networks. This means less precise, because you have less money at the local level, et cetera,
but you have a lot of problems that you want to solve, and so you densify; and there is also an increase of private networks, private sensors. So there is one issue in this story: we want to densify and have new stations, but then how do we guarantee the sharing of the information?
With satellites, I would say it's quite easy. There is a satellite mission; they decide the format and they put the data on a platform, and it is shared with everybody. This is generally how it goes nowadays. But with in-situ observation, you have a huge number of different types of sensors,
different types of formats, different types of variables, et cetera. And the other thing is that when you pay for your networks and their maintenance, you are not incentivized to share this information, because it is an additional cost for you, and there is no obligation to do it.
So sharing this information is on a voluntary basis only. To try to overcome these issues, in 2010 we started to develop this software based on the open standard Sensor Observation Service, which allows sharing data in a standard format.
This standard allows collecting data from a data producer's point of view, so a new sensor can be registered and can feed its data to the database, to the server. And on the other side, there is a consumer that can get the data with the standard OGC protocol: GetCapabilities, et cetera.
This is based on XML and standardized data access and semantic representations. This is the software that we have implemented. It's a server-side application that has a graphical user interface,
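As a rough illustration of the consumer side, an OGC SOS 1.0 request can be expressed as a simple key-value-pair URL. This is a generic sketch, not istSOS-specific code, and the endpoint below is invented:

```python
from urllib.parse import urlencode

def sos_request_url(endpoint: str, request: str, **params) -> str:
    """Build a KVP (key-value pair) URL for an OGC SOS 1.0 request,
    e.g. GetCapabilities or GetObservation."""
    query = {"service": "SOS", "version": "1.0.0", "request": request}
    query.update(params)
    return endpoint + "?" + urlencode(query)

# Hypothetical endpoint; a real istSOS deployment defines its own service path.
url = sos_request_url("http://example.org/istsos/demo", "GetCapabilities")
print(url)
```

A GetObservation call would pass extra parameters (offering, observed property, time window) through `**params` in the same way.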
mostly for administration, but you can still see the different types of observations; you have your stations, their locations, and the data. You can edit, update, and do a lot of things. Maybe I will put a bit of emphasis on the functionalities at the end, but here I want to show how we have used istSOS over these years
in different types of applications, mainly locally in research projects. We are from academia, a university, so we have used this software for consultancy for governments, but also in research projects. And there are also other applications in different countries that I didn't include here
because sometimes you don't even know about them, or you don't have the application. The first application is an early warning system for lake flooding in the Locarno area, in the southern region of Ticino. We have different types of contextual information,
like the houses and the sensitive, exposed elements. And then we have observations of the lake levels and the river gauges. We collect and process all this information through a web processing service that runs hydrological models and, based on the weather forecast, forecasts
the future level of the lake. And this is used in a decision support system by the civil authorities to take actions, to notify interventions, to manage all the exposed elements, take decisions, and make plans.
This is actually in production; it has been running for more than five or six years. We also had one flooding, and they used the system successfully. So the collection of real-time data is very useful as a building block in the chain.
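The decision logic behind such a system can be imagined as a simple threshold rule on the forecast lake level. The sketch below is purely illustrative: the element names and threshold values are invented, not the operational ones from the Locarno system:

```python
def elements_to_notify(forecast_level_m, thresholds):
    """Return the exposed elements whose (invented) alert threshold,
    in metres above sea level, is reached by the forecast lake level."""
    return sorted(name for name, h in thresholds.items()
                  if forecast_level_m >= h)

# Invented thresholds for illustration only.
THRESHOLDS = {
    "lakeside parking": 193.5,
    "pedestrian promenade": 194.0,
    "old town cellars": 194.7,
}

print(elements_to_notify(194.2, THRESHOLDS))
```

In the real system the forecast comes from the hydrological model fed by the istSOS observations, and the notification list drives the civil protection interventions.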
Another one was a project for landslide monitoring. The Politecnico di Milano developed a new system, a sort of geophone, to collect early information on possible slides. The system collected this information through the Sensor Observation Service
and also delivered alerts to the administration in case something was not right. Another one is an FP7 project named ENORASIS, whose aim was to develop a system that collects information from agriculture,
from the fields, and, based also on weather forecasts, produces an irrigation plan for the next two days for the farmers, so that they can save water and maximize the yield. Here too, sensors were automatically sending data to the SOS,
which were then used in the processing of this information to make predictions. Another one is a Horizon 2020 project, FREEWAT. In this project, istSOS has been used as a data source for a number of information layers, to integrate these data within QGIS.
FREEWAT is a QGIS extension to run groundwater modelling, basically, and istSOS provided a lot of information to calibrate and set up your models and make predictions. This next one is the first application, and the motivation why we started to develop this system.
It's the management of the hydro-meteorological network for the local administration, the Canton Ticino in Switzerland. Today the system is running, we have more than 700 sensors registered and 50 years of data, and these are some statistics of the system.
And this is really working smoothly. Honestly, I have to say, it's very stable. Another project we recently came up with integrates quality data from the lakes, so also biological data integration and sampling from automatic buoy stations.
In this application, one experiment we also tried was to deploy the software not only on the server side but also on the sensor side, directly on the buoy, so that the data were served directly from the buoy.
Another recent application, with biologists, tries to monitor the habitat of mosquitoes in manholes, because there is an issue in Switzerland with the tiger mosquito,
which brings diseases, and they are moving toward the north. So far it was expected that it would not go over the Alps because it was too cold, but it actually seems to be happening,
probably because there are hotspots where the temperature is higher than expected. A previous study was done using satellite land surface temperature. So we created this sensor, the first type of sensor that we created that uses LoRa as a protocol for communications, and we collect these data
and we integrate the data from the LoRaWAN server into istSOS, so that we also connect data from LoRa networks. This next one is a research-for-development project that we have,
and it is on the way to finish at the end of the year, mainly with Sri Lanka and Pakistan, to deploy and create very low-cost and fully open weather monitoring stations. It means they use open hardware, open standards, open software, and also open data. They automatically collect data from the fields and create a statistical report;
the data are put on CKAN for availability, and there is the server with FAIR data availability. Such stations are very low-cost, and we have deployed more than 30 stations in one watershed in Sri Lanka so far.
And let me make a small advertisement: in the next months we are applying to a new opportunity for additional funds to foster the impact of this project. And we are thinking about creating a training program and creating a sort of community.
So if there is anybody, or if you know anybody from a low-income country, who would like to participate, we can go there and do some trainings, and then they will train the trainers and expand the networks. We would be very happy. This is the link where you can find some information.
So we have evolved. I'll go quickly to the end. Over this time we have evolved through versions 1, 2, and 3. We added a number of functionalities. We support authentication and authorization. We support data aggregation on the server side, so you can directly download the average, the maximum, and things like that.
We support time zones. We have a quality index for each observation: when you get an observation, it comes with an associated index that tells you the quality of this information, and you can then use it to filter. We support MQTT integration directly, so you can feed data into the SOS from MQTT,
or, the other way around, you feed istSOS and istSOS will feed MQTT. And we support virtual procedures, which are a sort of on-the-fly processing of the data.
We are using them, for example, to convert data from river height to river discharge. We support JSON: together with the standard, which is in XML, we have also created a RESTful API that is compliant with it and uses the same features and the same ontologies.
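A virtual procedure of this kind, converting stage to discharge, typically applies a power-law rating curve. The coefficients below are invented for illustration, since real rating curves are calibrated per gauging station:

```python
def discharge_from_stage(h: float, a: float = 1.8,
                         b: float = 1.5, h0: float = 0.1) -> float:
    """Toy rating curve Q = a * (h - h0)**b converting river stage h [m]
    into discharge Q [m^3/s]. a, b, h0 are illustrative, not calibrated."""
    if h <= h0:
        return 0.0  # below the zero-flow stage, no discharge
    return a * (h - h0) ** b

print(discharge_from_stage(1.1))
```

Exposing such a function as a virtual procedure means the consumer requests "discharge" while the station only ever measures and stores the raw water level.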
And then, of course, we have a graphical user interface. But then we started to run some load testing, and we saw that with more than 1,000 concurrent users
the performance breaks down. So we decided to change this: we went through a renewal of some parts of the code and started to develop istSOS 3, which had great first results,
because you can see that with an increasing number of concurrent users it is much more stable. This allows us to better address the Internet of Things issues, because in the future we will have more and more sensors, so more and more users will hit the server concurrently.
But then, at FOSS4G Asia last December, we met and decided that this was just an improvement of the code; it was not removing the limits. So we decided to rewrite the code completely, everything, and we started to develop istSOS-μ,
which is microservices-based, uses gRPC, and is planned to support multiple standards, so that it is more scalable on cloud platforms. And that's all. We hope to give our small contribution to combating climate change.
Thank you. Questions? That is awfully quiet. Now I'm sure you still have some.
So we decided to distinguish the different objects that we have in the sensor observations and to create a small service for each one. So we will have a service for the observations, one service for feeding the locations, one service for the observed properties, and things like that.
This depends on the needs, because there are two different types of future usage with IoT: either you have sensors with a very, very high frequency, so you have a large number of observations that you need to support, or maybe you have less frequent observations
but many more sensors. And this has a different impact on the database, because you are acting on different objects. Making this sort of separation allows scaling only the part that you really need to scale. And this should provide, in the future, the capacity to keep high performance
also under high-load conditions. And the colours are the priorities? Yes, the colours are the priorities that we decided. This is the core part, and then there are other additional parts, like LoRa for example.
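The per-entity separation described in this answer can be pictured as routing each SOS entity to its own service, each scaled independently. The service names and replica counts below are invented for illustration, not istSOS-μ's actual ones:

```python
# Invented per-entity services and replica counts, for illustration only.
SERVICES = {
    "observations": 8,        # scale up for few sensors at very high frequency
    "sensors": 3,             # scale up when many devices register and feed
    "locations": 1,
    "observed_properties": 1,
}

def service_for(entity: str) -> str:
    """Return the name of the (hypothetical) microservice for an entity."""
    if entity not in SERVICES:
        raise KeyError(f"no service for entity {entity!r}")
    return f"{entity}-service"

print(service_for("observations"), SERVICES["observations"])
```

The point of the split is exactly this independence: a deployment with high-frequency sensors raises only the observation replicas, one with many devices raises only the sensor-registration side.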
We also want to add some modules for the maintenance, because in the 4ONSE project, the one with the local low-cost monitoring stations, there was a big issue in being able to know how the maintenance is done,
because, for example, if you have a station and you want to use its data, knowing that the maintenance is correctly done is one of the first things you want to know, to understand the quality of the data that arrives. I think we have time for a last question.
I think I have read, or maybe it was in the presentation, that based on the received data you are sending messages or alerts to the owners of the stations or sensors. How is that managed? Is it just checking some minimum values or value ranges,
or is it collecting some set of data series and then making a decision about when to send alerts? How is that organised? In the software, in a part which is, let's say, less developed,
without a graphical user interface, we have also implemented a web notification service and a web alert service. So you can basically create your own script, and you decide when there is a need to send alerts; then people can register to this event,
to this processing, let's say, and when it happens, according to your script, they will be informed, otherwise not. So, for example, in this application for the civil protection, at a given height of the lake there is information for certain types of exposed elements
to be informed, so they can be removed quickly before the lake goes up, and things like that. A very short one: maybe you know, has anyone tried to use something from machine learning,
or something like that, for prediction? Yes, the ALBIS project, for example: we collect information from the sensors, but then there is a machine learning algorithm that makes the prediction for the possible expansion of the species,
of the tiger mosquito, in different areas. So it's more of a processing task. No, no, no. Ladies and gentlemen, thank you.