
Interview with Felix Cremer


Formal Metadata

Title
Interview with Felix Cremer
Series Title
Number of Parts
44
Author
License
CC Attribution 3.0 Germany:
You may use, modify, reproduce, distribute and make publicly available the work or its content, in unchanged or modified form, for any legal purpose, provided that you name the author/rights holder in the manner specified by them.
Identifiers
Publisher
Year of Publication
Language
Producer
Production Year: 2023
Production Location: Wageningen

Content Metadata

Subject Area
Genre
Abstract
Felix Cremer did his PhD on time series analysis of hypertemporal Sentinel-1 radar data. He currently works at the Max Planck Institute for Biogeochemistry on the development of the JuliaDataCubes ecosystem within the scope of the NFDI4Earth project. The JuliaDataCubes organisation provides easy-to-use interfaces for working with multi-dimensional raster data. Following his presentation at the OEMC Science Webinar, he was asked a few questions by communication experts from OpenGeoHub (OEMC Work Package 8).
Keywords
Curve fitting, dimensional analysis, neuroinformatics, symbol table, strategic game, parallel interface, multiplication operator, space of distributions, task, formal language, graph, physical system, mean, functional, discussion/interview
Multiplication operator, functional, CASE <computer science>, moment problem, satellite system, analysis, digitization, mereology, service provider, theory of relativity, type theory, cube, bit, surjectivity, time series analysis, scripting language, area, convolution operator, neural network, distance, position operator, variable, difference, discussion/interview
Transcript: English (auto-generated)
My name is Felix Cremer, I'm working at the Max Planck Institute for Biogeochemistry, and I presented on how to use the Julia language, the YAXArrays.jl package and the JuliaDataCubes ecosystem to handle large raster data. The idea is that you want an efficient system to access your data, but you also want functionality so that you don't have to think too much about the dimensions and how they are ordered. You want to compute along a certain axis, for example the average over time, without needing to know whether the time dimension is the first, second or third dimension of the array, and therefore you want to be able to access the data by dimension names, as symbols, so that you don't have to think about these things.
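The dimension-name idea described here can be sketched in a few lines; this is a minimal, hypothetical example assuming the YAXArrays.jl API (a `YAXArray` built from named dimensions, and `mapslices` with a `dims` keyword), not code shown in the talk:

```julia
using YAXArrays, DimensionalData, Statistics

# Build a small cube; the dimension order is arbitrary on purpose.
axes = (Dim{:time}(1:12), Dim{:lon}(1:20), Dim{:lat}(1:10))
cube = YAXArray(axes, rand(12, 20, 10))

# Average over time by naming the dimension -- no need to know
# whether :time is the first, second or third axis of the array.
tmean = mapslices(mean, cube, dims = "time")
```

The same call would work unchanged if the underlying array stored time as its last axis, which is the point being made in the interview.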
So one advantage of YAXArrays compared to xarray is that for certain computational tasks, YAXArrays has a much simpler distribution or parallelization strategy. In xarray, depending on the task, you need to set up the whole computational graph for all chunks; in YAXArrays that is not necessary, so you don't need to set up the whole graph for the parallelization, and therefore there are certain computations that are possible in YAXArrays that are not possible in xarray.

I think there's a very small learning curve to get started because, as I showed, you can easily use your Python or R functionality. If you have a function that works in Python or R on a time series, and you want to use this function along every time series in your latitude-longitude-time data cube, you can easily use YAXArrays to parallelize this. You can then think about switching other parts of your analysis to Julia, but for a first start you can just use your Python script inside of the YAXArrays package.
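One way to wire an existing Python function into such a cube computation, sketched here under the assumption that PythonCall.jl is used for the bridge and that `scipy` is installed on the Python side (the interview does not name a specific mechanism or function):

```julia
using YAXArrays, PythonCall

# Import an existing Python routine; scipy.signal.detrend is just an
# illustrative stand-in for "a function that works on one time series".
scipy_signal = pyimport("scipy.signal")

# Wrap it so each time series goes to Python and the result comes back
# as a plain Julia vector.
detrend_series(ts) = pyconvert(Vector{Float64}, scipy_signal.detrend(ts))

# Apply the wrapper along the time dimension of every pixel in the cube;
# YAXArrays handles iterating over the latitude-longitude slices.
detrended = mapslices(detrend_series, cube, dims = "time")
```

Here `cube` is assumed to be a latitude-longitude-time `YAXArray` as in the talk; the rest of the analysis can stay in Python or R until one decides to port it to Julia.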
I think data cubes are here to stay. There are two things I see currently happening which are related but a bit distinct. One is that up to now, for a lot of data cubes, someone had to provide the data cube and make sure that every variable is on the same grid, so you have this difference between data cube providers and data cube users. I think there's more and more movement towards combining data cubes that are on different grids, so that you can more easily combine different satellites, or satellite data with your analysis data or with other types of data. The other area that we are also exploring is going from the longitude-latitude grid to discrete global grid systems (DGGS). On the longitude-latitude grid you have the problem that cells at higher latitudes cover a smaller area, so you have to adjust for that, and depending on the functions you are applying that is not that easy. The advantage of a DGGS is that you always have at least roughly the same cell area and the same neighboring distances, and then it's easier to apply, for example, a convolutional neural network, because the convolution is the same at all positions of the globe, which is not the case with longitude-latitude grids.