Key Note Lecture: Challenges of atmospheric data assimilation
Automated Media Analysis
The TIB AV-Portal uses these automatic video analyses:
Scene recognition: shot boundary detection segments the video based on image features. A visual table of contents generated from this gives a quick overview of the video's content and provides targeted access.
Text recognition: intelligent character recognition captures, indexes, and makes written language (for example, text on slides) searchable.
Speech recognition: speech-to-text transcribes the spoken language in the video into a searchable transcript.
Image recognition: visual concept detection indexes the moving image with subject-specific and interdisciplinary visual concepts (for example, landscape, facade detail, technical drawing, computer animation, or lecture).
Keyword assignment: named entity recognition describes the individual video segments with semantically linked subject terms. Synonyms or narrower terms of the entered search terms can thereby be included in the search automatically, which expands the result set.
Speech Transcript
00:01
What I would like to do today is give you a very rough introduction to data assimilation, and then I will briefly
00:13
talk about the different components, so the numerical weather prediction
00:18
model, the observations, and the data assimilation algorithm. I
00:24
will focus today on the ensemble Kalman filter algorithm, and then turn to some of the challenges we are facing with the use of this algorithm, in particular the preservation of the physical properties during data assimilation. In data assimilation, we try to extract from the observations the information relevant for the numerical weather prediction model. These observations are of course very noisy, and we try to
01:02
filter out the relevant information. We think of the observations simply as a vector of size 10^5 to 10^7, depending on the particular time. Based on the observations and our prior estimate, which we call the background, we try to produce an estimate, called the analysis, which we give to the numerical weather prediction model as its initial conditions. The analysis, like the background, is a vector of size 10^6 to 10^8. We use this very simple equation, where w is our state of the atmosphere, y is our vector of observations of the order of 10^5 to 10^7, w^b is our prior, the background, and w^a is what we are trying to get, the analysis. The operator H here, called the observation operator, maps from the model space to the observation space, so that we can compare our background, the forecast of the numerical weather prediction model for a particular time, to the observations at that time. Key in what we are doing is the gain K, usually called the Kalman gain; I write it in this particular form for this slide, but it is only important to remember that through K we take into account the accuracy of our background, that is, how much we trust it, in terms of the error covariance matrix of the background field, and how much we trust the observations, in terms of their error covariance matrix. So this is the bottom-line equation of data assimilation, and I will now go through the different components of the atmospheric data assimilation system.
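Written out, the bottom-line update just described takes the following form; the notation follows the talk (w for the state, superscripts b and a for background and analysis, y for the observation vector, H for the observation operator), and the explicit gain formula is the standard one, assumed here rather than read off the slide:

```latex
w^{a} = w^{b} + K\bigl(y - H(w^{b})\bigr),
\qquad
K = P^{b} H^{T}\bigl(H P^{b} H^{T} + R\bigr)^{-1},
```

where P^b is the background error covariance and R the observation error covariance.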
03:51
One component is the physical model, and the physical model of the atmosphere is, of course, the discretized Navier-Stokes equations, on many possible different grids: it can be a regular latitude-longitude grid, an icosahedral grid, or, as in the newest operational model of the German weather service, a grid like this one, with finer resolution in some area of interest. So, all
04:17
the atmospheric variables which we have in the numerical weather prediction model, so pressure, temperature, the wind field, specific humidity, for example; and of course everything which we are not resolving is parametrized in the model. So, with
04:38
data assimilation we produce the initial conditions for the global model or for the regional model. It is important to mention here that even the global models are currently reaching resolutions of about 10 kilometers, and the regional models are at resolutions of a few kilometers, so they are now able to resolve very nonlinear processes. So let us ask: in this key bottom-line equation which I showed you at the beginning, where have we put in the structure of the grid or of the model, where do we take into account whether the model is regional or global? This is not explicit; basically, this comes in through the uncertainty estimates that we need for data assimilation, that is, through the error covariance of the background field and the error covariance of the observational errors. For the regional model, for example, the errors will also come from the global model which is used for its lateral boundary conditions, and of course from the unresolved scales. So on one side we have the dynamics as represented in the numerical weather prediction models; on the other side we have the
06:28
observations of the atmosphere. This figure from the World Meteorological Organization shows all the different ways we obtain observations: for example, aircraft measure wind and temperature, we have satellites, in-situ measurements of temperature and wind, ships measuring as well. Of course everything is represented here in a single image, but for satellites, for example,
07:04
we currently have quite a few circling the earth. OK, so
07:12
we distinguish two types of observations. Some of them give us an observation of a model variable directly, for example the velocity, pressure or temperature, and in that case the observation operator which we need in order to compare the observations to the model output is usually very simple, usually an interpolation. However, we also have observations such as satellite data, where we are not observing a model variable, and there we need relatively complicated observation operators.
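For observations of a model variable, the observation operator reduces to interpolation from the model grid to the observation locations. A minimal sketch in Python; the grid, the state, and the station locations are invented for illustration:

```python
import numpy as np

# Hypothetical 1-D model grid and a model state (say, temperature) on it.
grid = np.linspace(0.0, 100.0, 11)   # model grid points [km]
state = 280.0 + 0.1 * grid           # a linear temperature profile [K]

def observation_operator(state, grid, obs_locations):
    """Map the model state to observation space by linear interpolation."""
    return np.interp(obs_locations, grid, state)

# Stations sitting between model grid points.
obs_locations = np.array([12.5, 47.0])
simulated_obs = observation_operator(state, grid, obs_locations)
print(simulated_obs)
```

For satellite radiances the operator would instead involve a radiative transfer model, which is far from a simple interpolation.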
07:59
So, just as a figure for Germany: here we have the in-situ observations, where the size of the circle represents how many data points we have; these are usually velocity, temperature, pressure and humidity measurements. If we look at one particular day of aircraft measurements, this is the distribution we have horizontally over Germany, and if we look at particular satellites, we have the observations along their tracks. What is important to point out here is that the instruments we are using have different measurement characteristics and different measurement errors, but also different spatial and temporal coverage. So how do
08:57
we combine these two? So these are two
09:01
sources of information on the atmospheric state, one coming from the observations and one coming from the forecast of the numerical weather prediction model. As a mathematician, one would write a cost function
09:16
like this and say: I would like to find an estimate which is close to my background field within the accuracy of this matrix B, and which is close to the observations within the accuracy of the observation error covariance matrix R. However, there are of course problems here. How do we know what this matrix B is? That is one thing; and the second thing is, once I produce this estimate, how do I know how good it is? This is the way data assimilation was done, primarily, in the eighties. Then, in the nineties, there was a switch to methods based on samples, which actually allow this covariance matrix to vary in time, and I will now briefly introduce this ensemble Kalman filter method. So we start by writing a nonlinear dynamical system, where M is our full model, and we would like to estimate the state based on observations that are discrete in time and subject to measurement error. The Kalman filter was developed in the sixties, for a different problem than atmospheric data assimilation, and consists generally of two steps. In the first step, where we do not have observations, what we have is the analysis propagated with the full nonlinear model, allowing for some error in the model. This gives us the background, and it also gives us an estimate of the error covariance, so how good this background is, where, in the case of a nonlinear model, this involves a linearization of the model. Then, once we have the data, we have the following equations: this is the equation from the beginning, so we correct the background based on the observations, where this is our Kalman gain, and in the Kalman filter equations we have as well an estimate of the uncertainty of our analysis. These equations are exact for Gaussian error statistics and a linear model, but they have not been used in atmospheric data assimilation, simply because we are dealing with vectors which are very large, which means that these covariance matrices are huge. The idea came in the nineties, in 1994, from Evensen: to approximate the Kalman filter equations and simply represent all of the covariances from an ensemble of geophysical model simulations. Basically, we run numerical weather prediction models in parallel, usually about a hundred parallel runs, and we calculate their mean to get our background, and likewise the sample covariances.
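The ensemble approximation described here, with the background mean and the sample covariance estimated from a modest number of parallel runs, can be sketched for a toy state; all sizes and values below are invented (a real system has 10^6 to 10^8 state components, not 3):

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 3, 100                    # state size, ensemble size
truth = np.array([1.0, 2.0, 3.0])

# Background ensemble: truth plus noise, standing in for k parallel model runs.
ens = truth[:, None] + rng.normal(0.0, 0.5, size=(n, k))

# The ensemble mean and sample covariance replace the huge exact quantities.
mean = ens.mean(axis=1)
anom = ens - mean[:, None]
P_b = anom @ anom.T / (k - 1)    # n x n sample covariance, rank at most k - 1

print(mean)                      # close to the truth
print(P_b)                       # roughly 0.25 times the identity
```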
13:30
So, compared to the original Kalman filter equations, we still have this analysis step, where we combine the background and the observations to obtain the analysis and the analysis error covariance. But now we need, from these, to obtain a so-called analysis ensemble, so that we can propagate each member of the ensemble to get the background ensemble, from which we obtain the new background and the new sample covariance. What is
14:06
also very interesting, and what really allowed the application of the ensemble Kalman filter to atmospheric applications, is the fact that the covariance matrix, because it is built from only about 100 ensemble members, has a very low rank. So basically we can write it in a reduced form, and because of this we can explicitly calculate the analysis error covariance and use its square root to generate a new ensemble. So we have a relatively efficient way of computing the analysis and the analysis ensemble, and from the ensemble we have all the statistics which we need for the Kalman filter equations, which are simply the mean and the covariance. Another way of doing this is to use the Kalman filter equations on each of the background ensemble members while perturbing the observations, and this is the second variant of the ensemble Kalman filter.
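The reduced form mentioned here can be written compactly; for k ensemble members w_i with mean \bar{w} (standard ensemble square-root notation, assumed here rather than taken from the slides):

```latex
X' = \frac{1}{\sqrt{k-1}}\bigl[\,w_{1}-\bar{w},\;\dots,\;w_{k}-\bar{w}\,\bigr],
\qquad
P^{b} = X' X'^{T},
```

so P^b has rank at most k-1, and the analysis ensemble is generated by transforming the columns of X' rather than ever forming P^b explicitly.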
15:29
OK, so what is the relation of this to the variational method? Of course, the ensemble Kalman filter covariance has a low rank, so we cannot take its inverse here; basically, what is done is that we project with the square root of the background error covariance into the ensemble space, and do the minimization in ensemble space,
15:58
and the big difference is that we now have different covariances depending on the time at which we are doing our assimilation. So what is
16:10
good about this ensemble Kalman filter? It naturally represents the cross-correlations between different variables, because they come out of the numerical weather prediction model; the covariances are flow-dependent;
16:26
and the computational algorithm is not expensive and is easy to implement on any grid. And what is the problem here? Of course, using up to 100 ensemble members for problems with 10^6 to 10^8 degrees of freedom, the covariances which we are calculating are not very accurate. So what do we do? We have to somehow increase the rank of these covariances. This is done with a method called localization, where we basically cut the covariances to zero beyond a certain distance; this increases the low rank and also removes the less accurate long-range correlations.
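Localization as described, cutting the sampled covariances to zero beyond a chosen distance, is usually applied as an elementwise (Schur) product of the sample covariance with a distance-based taper. A sketch with a hard cutoff taper; in practice a smooth function such as Gaspari-Cohn is used, and the grid and radius below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

n, k = 20, 5                                # many grid points, few members
grid = np.arange(n, dtype=float)
ens = rng.normal(size=(n, k))
anom = ens - ens.mean(axis=1, keepdims=True)
P_b = anom @ anom.T / (k - 1)               # noisy sample covariance, rank <= k-1

radius = 3.0
dist = np.abs(grid[:, None] - grid[None, :])
taper = np.where(dist <= radius, 1.0, 0.0)  # hard cutoff; smooth in practice
P_loc = taper * P_b                         # Schur (elementwise) product

# Localization raises the rank and removes spurious long-range covariances.
print(np.linalg.matrix_rank(P_b), np.linalg.matrix_rank(P_loc))
```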
17:24
So this was the introduction to the ensemble Kalman filter, and I would now like to go on to the things which we have been doing recently, which is to look at what happens with the physical properties of the analysis depending on the data assimilation algorithm. To motivate this, here is a very simple example. Let us try to estimate a triangle, given by this black line, based on an ensemble whose members are perfect copies of this triangle, just in the wrong locations. If we calculate the mean over this background ensemble, we get this red line; basically, one does not see a triangle here. But there is one property which is preserved, and this is the property that the mass, in this case just the area underneath the curve, is equal to the area in each of the members. Now, if I use the ensemble Kalman filter for one step, based on these observations here, what I get for the mean is this red line, which now has some properties of the triangle, and one can see a triangle appearing in the ensemble, shown here in the grey lines. But what we also see is that many of the ensemble members have negative values, although our prior had no negative values, the observations had no negative values, and what we are trying to estimate should not have negative values either. In atmospheric data assimilation, for example, we are of course interested in estimating rain, but if we do this with the ensemble Kalman filter method, we have to do an additional step after the assimilation, in which we set the negative values to zero in each of the ensemble members, in order to give the numerical weather prediction model physical initial conditions. If we do this, however, there is one property, conserved by the ensemble Kalman filter algorithm, namely the mass, which would be the same in the analysis ensemble as in the background ensemble, that we now lose when we set the negative values to zero. So this is an example of how physical properties can be lost during data assimilation. What we were interested in looking at in more detail is not only what happens with the mass, but what happens with the other conservation properties, for example for vapor, for energy, for enstrophy, et cetera. In numerical weather prediction there is a long history of incorporating the most important conservation properties of the continuous system in order to improve the prediction of the nonlinear flow, and we asked ourselves whether data assimilation algorithms should follow a similar approach. To investigate this, we first looked at which properties are recovered and which are not in the ensemble Kalman filter; then we included the ones which are not automatically recovered as constraints in the minimization problem, and we have shown the impact on the prediction.
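The trade-off described above, where clipping restores positivity but destroys the mass conservation of the ensemble Kalman filter update, can be checked in a few lines; the analysis values below are invented:

```python
import numpy as np

# A hypothetical analysis ensemble member for a positive quantity such as rain,
# as produced by an ensemble Kalman filter update; some values went negative.
analysis = np.array([0.0, -0.2, 0.5, 1.0, 0.4, -0.1, 0.0])

mass_before = analysis.sum()            # the mass the EnKF update conserved
clipped = np.clip(analysis, 0.0, None)  # post-hoc fix: set negatives to zero
mass_after = clipped.sum()

# Clipping has added spurious mass: mass_after > mass_before.
print(mass_before, mass_after)
```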
21:53
So how did we approach this? We use a nonlinear shallow water model which is supposed to represent a section of the Northern Hemisphere, and this model is discretized in such a way that the mass is conserved, that energy and momentum are conserved, and that the enstrophy is conserved for non-divergent flow. We use a so-called twin experiment, in which we take the observations from one model run, the truth or nature run, and use a different model run to try to recover it. So this is the model
22:52
without any data assimilation. We have started from initial conditions which are slightly out of balance; in this period here, for about 5 days, we will do data assimilation, and afterwards we will have two weeks of pure forecast starting from these initial conditions. Just to illustrate the model: during the whole period the mass is perfectly conserved; the energy is slightly lost, about 1 percent by the end of the period; and the enstrophy is lost here, about 5 percent, since, again, the enstrophy is conserved only for non-divergent flow. Now, these pictures are supposed to illustrate what is happening with the energy and what is happening with the enstrophy. The different columns correspond to what we are observing: since this is a shallow water model, we can observe u, v and h, or u and v only, or h only. The rows are energy and enstrophy. The black line is what we are supposed to get, so what is in the nature run, in all of the plots, and the different colored lines are different settings of the data assimilation: we always observe at every fixed point, but with different localization radii; the smaller the localization radius, the higher the rank, but the correlations are no longer correct. So what happens? We see that the ensemble Kalman filter, no matter what we do with respect to localization, is able to recover the correct energy after a certain period of time, sometimes faster, sometimes slower. For the enstrophy, on the other hand, we see that in almost all cases it converges to some constant value which is higher than in the nature run, and if we observe the height field only, we never get the right value. Of course, this has implications for the spectra, the kinetic energy spectra.
The columns are again the same: this is the observation of the height field only, and this is the observation of all the variables. Here, during the first few assimilation cycles, where we start from the worst initial conditions and the data assimilation has the most work to do, it actually introduces more energy into the smallest scales; and here, when the data assimilation has already converged, so that the remaining errors are basically of the same size as the observation errors, we still have too much energy in the smallest scales.
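The conserved quantities monitored in these experiments have simple discrete forms. A sketch for a one-dimensional shallow-water state; the grid and the fields are invented, and the talk's model is two-dimensional, so this only illustrates the bookkeeping:

```python
import numpy as np

g, dx = 9.81, 1000.0                  # gravity [m/s^2], grid spacing [m]
x = np.arange(0.0, 50_000.0, dx)
h = 100.0 + np.exp(-((x - 25_000.0) / 5_000.0) ** 2)  # fluid depth [m]
u = 0.1 * np.sin(2.0 * np.pi * x / 50_000.0)          # velocity [m/s]

def mass(h, dx):
    """Total mass (per unit density): the integral of the depth."""
    return h.sum() * dx

def energy(h, u, dx):
    """Total energy: kinetic plus potential, integrated over the domain."""
    return (0.5 * h * u**2 + 0.5 * g * h**2).sum() * dx

# Diagnostics like these are tracked through the assimilation and the forecast.
print(mass(h, dx), energy(h, u, dx))
```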
26:26
OK, and this of course has an effect on the prediction. Let us just look at these green and red lines. This is the period where we do the data assimilation; at the end of the assimilation both settings have about the same accuracy, and we start the free forecast from these initial conditions. It seems to be only a small amount of energy, but having the wrong enstrophy has an effect in the prediction: we see the difference between the two growing across the forecast. So this is the paper in which we have looked
27:11
into this. Now, another thing which we did: we combined the ensemble Kalman filter with constraints. What I am presenting here is based on this paper. Again, we can think of the ensemble Kalman filter as a minimization problem, and here I take the approach that for each background ensemble member I want to produce an analysis ensemble member based on perturbed observations, and I do something very simple: I just put the constraint into the optimization problem. As for the ensemble Kalman filter, we have to project into the ensemble space, depending on the rank of the covariance, and the constraints are projected as well. If we look at the simple example from the beginning, this is now the result, and we see that all ensemble members are positive and the uncertainty is correctly represented. We then looked at a somewhat more complicated problem than the previous example, where we already knew the answer: a modified shallow water model. This is a one-dimensional shallow water model in which, in addition to the velocity and the height, a rain variable is introduced; the model is described in the paper by Würsch and Craig from 2014. Just to illustrate the assimilation setup here: we mimic radar data, which means we have observations only where there is rain. The red line is the truth, and ensemble member 1 and member 2 are shown in these yellowish colors, just to illustrate how variable this model is.
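The constrained variant described here minimizes the same quadratic analysis cost function, but subject to constraints, here positivity. A minimal sketch using projected gradient descent on a tiny invented problem; the QPEns of the paper solves such problems per ensemble member, in ensemble space, with a proper quadratic programming solver:

```python
import numpy as np

# Tiny invented problem: two state components, one observed, diagonal errors.
w_b = np.array([0.1, -0.05])   # background state
B_inv = np.diag([1.0, 1.0])    # inverse background error covariance
H = np.array([[1.0, 0.0]])     # observe the first component only
y = np.array([0.3])            # the observation
R_inv = np.array([[4.0]])      # inverse observation error covariance

def cost_grad(w):
    """Gradient of J(w) = 1/2 (w-w_b)^T B^-1 (w-w_b) + 1/2 (y-Hw)^T R^-1 (y-Hw)."""
    return B_inv @ (w - w_b) - H.T @ (R_inv @ (y - H @ w))

w = np.maximum(w_b, 0.0)       # feasible starting point
for _ in range(500):           # projected gradient descent
    w = w - 0.1 * cost_grad(w)
    w = np.maximum(w, 0.0)     # project onto the constraint w >= 0

# The analysis is pulled toward y in the observed component and is nonnegative.
print(w)
```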
29:26
So now again all the ensemble members, 50 for this problem, are shown in yellow; this is the result for the ensemble Kalman filter, and this is the result for the QPEns. Both methods are not really perfect: for the rain, everything is positive here, since to obtain positive rain we set the negative values of rain to zero, and we miss some of the clouds in both methods. But what is really different is that, in the areas where there is no rain, the ensemble Kalman filter produces spurious triggers for convection, which can be seen in all fields, whereas the QPEns does not have this artifact. And this
30:19
carries over to the error statistics, which are not so interesting; but what is really striking is that if we calculate the mass of the height field and the mass of the rain field, for the ensemble Kalman filter the mass actually decreases with time, while for the QPEns it is constant, because we constrain it: we constrain the mass of the height and the positivity of the rain, so this mass is constant, and this constraint on the mass of the height field has a positive effect on the mass of the rain field as well. So now this is a
31:05
slightly different plot: the previous one was through time, while here, since this is a 1D model, we can go to a relatively large number of ensemble members, so we now go up to 400 (the previous results were at 50 ensemble members). What we see here is that the ensemble Kalman filter with this strategy of setting the negative values to zero, after a certain number of ensemble members, does not really improve any more, and the mass is again decreasing with time. Just a last point here: I mentioned the constraints which we imposed on the positivity; we can also impose nonlinear constraints, which we did in this paper: during data assimilation we imposed constraints on the energy, constraints on the enstrophy, or constraints on both. These plots illustrate what happens during the two weeks of prediction, now for the two-dimensional shallow water model, after the introduction of these energy and enstrophy constraints during the data assimilation. The data assimilation stops here, so these are initial conditions which all look the same, they have the same accuracy, but as we start the forecasts, the error propagates differently through the nonlinear dynamics. OK, so just the conclusions. For any physical model we have to specify the initial conditions, and this is crucial. What is new in data assimilation is that, due to the use of the ensemble, we have incorporated
33:16
flow-dependent variations into the covariances, into the uncertainty, which naturally incorporate the cross-correlations between the different variables. This is important because, if I
33:29
have an observation of a velocity, I want this velocity to be able to correct my temperature. And finally, what I wanted to show with these different constraints is that by incorporating the constraints during the data assimilation, we are actually bringing the data assimilation closer to what the model would like to get as its initial conditions, and this has a positive impact on the forecast. Thank you.
Metadata
Formal Metadata
Title: Key Note Lecture: Challenges of atmospheric data assimilation
Series Title: The Leibniz "Mathematical Modeling and Simulation" (MMS) Days 2018
Author: Janjic-Pfander, Tijana
License: CC Attribution 3.0 Germany: You may use, modify, and reproduce, distribute, and make the work or its content publicly available in unchanged or modified form for any legal purpose, provided that you credit the author or rights holder in the manner they specify.
DOI: 10.5446/35427
Publisher: Weierstraß-Institut für Angewandte Analysis und Stochastik (WIAS), Technische Informationsbibliothek (TIB)
Publication Year: 2018
Language: English
Content Metadata
Subject Area: Computer Science, Mathematics
Abstract: The initial state for atmospheric numerical models is produced by combining available observational data with a short-range model simulation using a data assimilation algorithm. This gives us an initial state from which we can run a deterministic model to produce predictions of the future. The main goal of data assimilation is to produce the best analysis (best initial condition) for the numerical model, that is, the estimate that gives the best prediction for the time scales we are focusing on. This is a challenging problem, since the high-resolution numerical models of the atmosphere in use today resolve highly nonlinear dynamics and physics, making them, in short runs, very sensitive to proper initial and boundary conditions. In this talk, we present the mechanisms of data assimilation algorithms. We focus on the ensemble Kalman filter algorithm for estimating the atmospheric state, as well as the modifications it requires for our application. Most of the current algorithms used in practice for combining data and previous model forecasts (prior estimates) use Gaussian error assumptions. These assumptions are not appropriate for nonlinear dynamics, since only in the case of linear dynamics will Gaussian errors remain Gaussian in time, nor are they appropriate when estimating variables that need to be positive or within certain ranges, such as rain. Consequently, data assimilation for numerical weather prediction models that resolve many scales of motion, and for observations of higher temporal and spatial density and resolution, requires re-evaluating and improving the methodology currently inherited from less nonlinear applications. We argue that relaxing the underlying assumptions of the data assimilation algorithms might be possible by improving the link between the data assimilation and the model. For example, a stronger connection can be established by constraining the analysis with conservation laws and other physical constraints.
Applications are illustrated on a convective-scale data assimilation example.