On the Prospects and Challenges of Weather and Climate Modeling at Convection-Resolving Resolution
Video in TIB AV-Portal
Formal Metadata
Title 
On the Prospects and Challenges of Weather and Climate Modeling at Convection-Resolving Resolution

License 
CC Attribution 4.0 International:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor. 
Release Date 
2017

Language 
English

Content Metadata
Abstract 
The representation of thunderstorms (deep convection) and rain showers in climate models poses a major challenge, as this process is usually approximated with semi-empirical parameterizations due to the lack of appropriate computational resolution. Climate simulations using kilometer-scale horizontal resolution allow deep convection to be resolved explicitly and thus improve the representation of the water cycle. We present a set of such simulations covering European and global computational domains. Finally, we discuss the challenges and prospects climate modelers face on heterogeneous supercomputer architectures.

Keywords: Science
Transcript
00:03
Thank you. I think there's a word wrong in the title, because it's on the prospects and challenges of weather and climate modeling at convection-resolving resolution. Our speaker is going to try to make complex thoughts and views a little bit easier to understand, so please give him a warm welcome. — Thank you, and the applause did not help with the nerves; it's really terrifying to stand here, so excuse me, I'm extremely nervous — but you've probably heard that a few times by now. Before I begin I would like to thank a lot of people. First of all, the volunteers who make this conference possible: it's been amazing so far, I've seen so many interesting talks and had so many interesting discussions. And second, I would like to thank my co-authors for this talk. I work with them on a daily basis, but for this talk in particular I would like to thank Christoph Schär and Hannes Vogt, who contributed slides, which allowed me to spend a few days last week with my family instead of preparing slides. To introduce myself: my name is David, I'm an atmospheric scientist by training, and I currently work at ETH in Zurich, in a group which is interested in the climate over Europe. I work at the interface between atmospheric science and climate modeling on the one side and computational science and computer science on the other. So, this is basically
02:01
the summary slide from the first half of the last talk: the climate system is warming, that is unequivocal, and the human influence on the climate system is clear. Now that we've established that the climate is warming, it's time to go a bit beyond the last talk and look at what's coming in the future. We know the climate system is warming, and we're interested in how a future world might look. At the same time, there is the saying that making predictions is hard, especially about the future. One way to do it anyway is to use numerical models: we try to put the Earth system into equations and use climate models to make projections. But we build our models to answer very specific questions, to test a specific hypothesis; we don't want to recreate the Earth in a virtual world. This is important to remember: all models are wrong, but some models are useful, especially if you ask the right questions. So this is another summary slide
03:25
from the last talk, the very famous IPCC figure from the Summary for Policymakers. It essentially tells us that we have a choice: depending on the amount of carbon emissions, we can either live in a world with a temperature change like the map on the left, or we can keep a business-as-usual scenario and live in a world that looks more like the map on the right. This is a plot of surface temperature change, and it contains a lot of very robust features that have been addressed and investigated in the past. For instance, we see that the continents are warming faster than the oceans, and that the poles are warming faster than the tropics. The stippling marks where all the models that contributed to the plot agree on the sign of the change and where the signal-to-noise ratio is very high. Another way to look at it
04:36
is to look at the global mean temperature change. In the bottom panel you see the past observations, and in the big panel we again look at the scenarios: the business-as-usual scenario on top and the mitigation scenario on the bottom — this corresponds to the two-degree world I showed you before. The straight red line shows the mean change predicted by a large number of models — I think it's around 40 — and the color shading indicates the range these models span. This is the uncertainty range that our models project, and a lot of effort has been made in the past to understand this range, to find out where it comes from. Of course we would also like to reduce that uncertainty range, so that we know a bit better where we're heading. In this talk I want to look at some ideas for how we think we can reduce this range by improving our climate models, because we think that a large part of these uncertainties is due to uncertainties in the response of clouds; if we really improve the representation of clouds, we might be able to reduce these uncertainties. So, here is kind of an
06:16
outline: first, I would like to talk about clouds and climate sensitivity. Then I have to give you a very brief introduction to how climate models work, in a nutshell. Then I would like to talk a bit about new climate models which are able to work around some of the uncertainties in the current generation of models. After that I would like to talk a bit about the challenges of computing weather and climate with these new models, and finally I want to give a short outlook on what we can expect in the next couple of years. So let's begin with the clouds. You
07:03
remember that I told you a lot of the uncertainty comes from the representation of clouds. Clouds can react to warming in various ways, and here I have selected a few. Clouds can contribute to the warming — that would be a positive feedback: as the temperature gets warmer, their contribution makes the climate system even warmer. But they can also have the opposite effect and moderate the warming, meaning the climate will still get warmer, but the increase will be a little bit smaller. For instance, in a future world we could have more high clouds, which would contribute a positive feedback; we could have more low clouds, which would have the opposite effect; clouds could get higher, which would again be a positive feedback; they could get less icy and more watery, which would be a negative feedback; or they could change their position — for instance, if storms move closer towards the poles, this could again be a positive feedback. So you can see that there is a whole slew of ways clouds can react, and here I've chosen just a few — there are more of them. The problem in the current generation of climate models is that clouds are not explicitly resolved; we parameterize them, meaning we use physically based, semi-empirical models to represent them in our climate models. A way to illustrate
08:45
this: we have a model world, and we can change that world. One way to change it is to simplify it, to make it easier to understand. Here, what they have done is first of all remove the continents, so you have a planet consisting only of water, an ocean world. Nevertheless, the features of this planet are very similar to our climate system: you get an intertropical convergence zone, Hadley cells, extratropical cyclones — but the system we have to investigate is much easier to understand. Now you have this idealized world and you can run it for a couple of years; then you warm the system by four degrees and run it again for a couple of years, and then you take the difference between these two simulations. This is what is plotted here, for four different models arranged in the vertical, and you can see that these models react very differently: the patterns of the change in cloud radiative effect are different, and the change in precipitation also looks different. Our hypothesis is that these differences arise mostly because clouds are not represented explicitly, and that the parameterizations involved have substantial uncertainties.
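The experimental design described here — a control run, a uniformly warmed run, and the difference of their climatologies — can be sketched in a few lines. Everything below is illustrative: the toy `run_model` function is a hypothetical stand-in for a real aquaplanet simulation and only generates synthetic precipitation fields.

```python
import numpy as np

def run_model(sst_offset_k, n_years=5, seed=0):
    """Toy stand-in for an aquaplanet simulation: returns annual-mean
    precipitation fields of shape (year, lat, lon). A real run would
    integrate the model equations; here we draw synthetic data whose
    mean depends on the imposed sea-surface-temperature offset."""
    rng = np.random.default_rng(seed)
    base = 3.0 + 0.1 * sst_offset_k        # mm/day, crude warming response
    return base + rng.normal(0.0, 0.5, size=(n_years, 32, 64))

# Control climate and a +4 K experiment, as in the aquaplanet setup.
control = run_model(sst_offset_k=0.0)
warmed = run_model(sst_offset_k=4.0, seed=1)

# Climatologies (time means) and their difference: the response map
# whose inter-model spread motivates convection-resolving simulations.
response = warmed.mean(axis=0) - control.mean(axis=0)
print(response.shape)   # (32, 64): a lat-lon map of the response
```

Comparing such response maps across models, as the plot in the talk does, is what exposes the disagreement attributed to the cloud parameterizations.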
10:23
To explain quickly what a parameterization is, I first want to
10:28
give a short introduction to what a climate model is. What does a climate model do? It has two important jobs. The first job is to distribute heat horizontally: we have differential heating around the Earth — the tropics are warmer than the poles — and the first job is to mix between these two states. The way it does this is the
10:56
following. This is our high-resolution simulation at about 30 kilometers — a simulation made by a colleague — and this is basically how the climate model looks from the top. This is not a satellite picture; this is an entirely virtual world, and yet you can see many of the familiar features. Here in the tropics, for instance, we have the strong convection and the thunderstorms going on; in the extratropics we have the extratropical cyclones, the low-pressure systems coming towards Europe — many of you know these very well. At some point we will see tropical
11:36
storms — here would be a tropical storm — and it is with all of these eddies that the model mixes heat along the gradient between the tropics and the poles. So this is one job. The other
11:54
job is to mix in the vertical. Here is a similar view, but from the side. In a greenhouse atmosphere, the atmosphere warms the surface, and somehow a lot of the energy from the surface has to get back into the atmosphere. Usually this happens through the water phase — through clouds — but these clouds are very badly represented. So we have the horizontal heat, energy and mass transport, which has been represented very well since the seventies — a lot of smart people have done an awesome job there — but on the vertical transport there is still a lot of uncertainty. Now, some equations for
12:42
the climate model: it basically solves a number of coupled differential equations. It doesn't matter too much what each of these is; on top, for instance, you see the Navier-Stokes equations, and some might recognize this law here, the equation of state, the ideal gas law. This is the atmospheric part of our climate model, the COSMO model, which I am using and which, amongst other things, is also used by the German Weather Service for their weather prediction. Now, unfortunately, we cannot just solve these equations to get the nice motions you saw in the animation — sadly, no analytic solution to these equations is known. The only way is to solve them numerically, using numerical methods on a grid, and this is very, very expensive. So we have a
13:37
numerical mesh surrounding the Earth in three dimensions, and we solve a set of about five equations at each grid point. Once we have solved the equations everywhere, we have a new state, and from that state we can progress in time — the classical time-stepping approach. Now, depending on the computational power you have available, you can vary the number of grid points, that is, the resolution of the grid. There will be processes that are very well represented on that grid, and processes that are either not representable or whose length scale is too small to be captured by the grid. Typical global climate simulations today have a grid spacing on the order of 125 kilometers — imagine you have two grid points here and a process that falls right in between: that is hard to capture on the grid. So we have unresolved processes,
14:43
the so-called subgrid-scale processes, and there are a lot of them. We have, for instance, chemistry, cloud microphysics, radiation, and sensible and latent heat fluxes, which are very hard to represent on the grid. Then there are components we add as separate modules, for instance a land-surface model and an ocean model, which has a similar set of equations of its own. And then we have processes whose length scales are too small to be represented on the grid, for instance deep convection and shallow convection — the big thunderstorms in the tropics, or even in the extratropics. Next I would like to look at that in a bit more detail. So here we have a very,
15:32
very wonderful photograph of a thunderstorm over Lake Constance — this was in April 2006 — and it's a very beautiful image because we can see a lot of the processes that are going on. For instance,
15:49
here we have updraft and downdraft regions, where the air parcels rise very fast, with up to 10 meters per second. Over here we have what we know as the anvil region, where there is a lot of ice. And here, directly below, something very interesting: hydrometeors are falling out of the cloud and forming a gust front here at the ground. We can see them in this image because in this case they consist mostly of ice particles rather than normal rain, and that is what makes them capturable by the camera. Now imagine we want to resolve this in a model. We overlay a
16:36
grid, and the grid has a grid spacing of 10 kilometers, so we have information at these discrete points. Now a physical parameterization tries to capture all the processes that are going on in between. This is an extremely difficult task; it has been worked on since the seventies, and some of the most brilliant scientists I know have worked on this problem — and we still have the uncertainties I have shown you. So we're proposing to take another route, maybe a simpler route: we propose to simply
17:22
increase the resolution of the grid, because then we have a few grid points representing the clouds and we can actually start to resolve the motions associated with these processes. We can go even a bit further and then capture many of the features of this cloud very well. So we've done this, and
17:47
as I said, we use the COSMO regional climate model, and we've done this for a simulation over Europe. We have done three simulations: one at 50 kilometers, one at 12 kilometers and one at 2 kilometers. We were able to do the 2-kilometer simulation because we use a very special version of COSMO, a version which is better able to make use of the capabilities of GPU accelerators — an effort led by Oliver Fuhrer. We used this model to run a simulation with about 1500 by 1560 grid points, and we simulated a period of about 10 years on 144 nodes; we can run one year of simulation in about two and a half days of wall time on this GPU prototype. Now, showing you only statistics is boring, so I'll
18:45
just show you a few cases from it. Some of you might remember the storm Kyrill, a very strong windstorm that affected Germany in January 2007. In this representation, the white shading indicates clouds and the colored shading indicates precipitation, with the blue colors being rather light precipitation and the red colors somewhat stronger precipitation. Just to orient yourself: the wind basically blows along these white lines. At this point in time the storm is located here, over northern Germany, and it exhibits all the typical features of an extratropical storm: here is a very nice warm front with precipitation along it, and here we can see a cold front with a bit of precipitation along it. So we
19:42
usually have precipitation of up to 5 millimeters per hour. Now we can increase the resolution of COSMO. What you can see
19:53
is, first of all, that we get a lot more detail, and we also get a bit more precipitation here along the cold front. But qualitatively,
20:03
the two simulations still look quite similar. Now we switch off the parameterization of convection and reduce the grid spacing of the model to about 2 kilometers. What you
20:19
can see is that the picture changes quite dramatically. We now have physical features going on which you could not identify before; for instance, here you can see these very narrow bands of rather intense precipitation, the cold-front rain bands. And if you look closely, it's really fascinating:
20:40
the wind blows here along these white lines, but the rain bands are actually not perpendicular to them; they are oriented at an oblique angle. You can also see that they are a bit broken up. If you compare this, for instance, with this radar
20:58
image, which was captured from an extratropical cyclone over the Pacific, you can see that these models actually start to look like the observations. So if you compare again
21:10
these two: it looks a lot more like the observations. OK, let me switch seasons and go to the summer. This is
21:21
the same kind of plot, but now we are looking at data from July 2006 — thunderstorms and rain showers in summer. On the left
21:31
side you can again see the 12-kilometer simulation, and on the right side the 2-kilometer simulation, where we treat the thunderstorms explicitly with the equations of motion. On the left side, all the rain that you see comes out of the parameterizations. If you focus a bit on the precipitation on the left side, you can see that it covers a very wide area and that mostly greenish colors appear; and if you watch closely, you can see that it peaks around noon. Now let's switch to the right side and run the animation again. We can actually identify the individual bubbles — we can see the individual thunderstorms in the simulation, because they are treated explicitly — and some of the thunderstorms have very small areas of very intense precipitation. This is a signature of a very different physical behavior of the simulations. And if you watch when the rain peaks, you can see that it peaks right about now, in the afternoon. If you think back to when you experience thunderstorms in summer, it's usually in the late afternoon or the beginning of the night. So the timing of the precipitation in this simulation agrees much better. As a summary
23:13
of the first part: we want to switch off the parameterization of convection, because then we can formulate our models much closer to physical first principles. This yields an improved representation of summer convection and precipitation extremes — this has been very well investigated, and here you see a couple of papers on that. The downside is that these models are computationally extremely expensive. In fact, if you
23:44
want to double the resolution of a climate model, you need about a factor of 8 more computational effort. So to go from the 10-kilometre range to the 1-kilometre range, we need to somehow find a factor of 10,000 in computational effort. I'd like
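This back-of-the-envelope scaling can be sketched as follows. This is my own illustration, not the speaker's code: the exponent assumes two horizontal dimensions plus a proportionally shorter time step; the quoted factor of 10,000 presumably also includes extra vertical levels and other overheads beyond this simple estimate.

```python
# Cost multiplier for refining the horizontal grid spacing of an explicit
# atmospheric model: twice the points in each horizontal direction, plus a
# twice-shorter time step (CFL constraint), gives 2**3 = 8 per doubling.
def cost_factor(refinement: float, horizontal_dims: int = 2) -> float:
    """Computational-cost multiplier for refining grid spacing by `refinement`."""
    return refinement ** (horizontal_dims + 1)  # +1 for the shorter time step

print(cost_factor(2))   # doubling the resolution: factor 8
print(cost_factor(10))  # 10 km -> 1 km: factor 1000 from this estimate alone
```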
24:04
to show you what we have tried in this direction.
24:13
Computing climate models is extremely expensive, and the way we deal with this is to use supercomputers. What makes a supercomputer "super" is the network: we have a bunch of nodes, and each node has sockets that are occupied by CPUs or GPUs or whatever, but the important part is that they are connected by a very strong and fast network. We can then slice our problem into smaller chunks and distribute them onto the individual nodes, each of which here uses a GPU. The computer I am using is called Piz Daint;
24:51
it is located at the Swiss National Supercomputing Centre in Lugano, and it consists of about 5,300 hybrid nodes. Each node contains an Intel Xeon CPU and an NVIDIA Tesla GPU. It is one of the larger machines in the world; it is currently number 3 on the Top500 list. But what is more important to us, it is also one of the greenest computers in the world. To use the machine you basically write a proposal, which gets sent out for review, and if the judges like you, you get some compute time on it. The important thing is that about 90% of the compute power of this machine comes from the hybrid nodes, that is, from the GPUs, and we think GPUs are chips that are very well suited to computing weather and climate models. This is because graphics cards, graphics processing units, or GPU accelerators as we like to call them, are massively parallel and throughput-oriented; but more importantly, they provide a much higher memory bandwidth, and they are able to hide some of the memory-access latency by running a lot of threads. This is important for weather and climate models because many of the operations in these models are limited by memory bandwidth and not by the available flops, the available floating-point operations. Let me quickly explain what I mean by memory
26:31
bandwidth. Let us look at one of the operations we do in the most expensive part of the climate model, the dynamical core. Many operations there are characterized by the fact that they need information from their neighbours: to update the orange grid point here, we need data from the neighbour on the left and the neighbour on the right, for this easy one-dimensional case. If we write it down, we have a(i+1), a(i) and a(i-1), so we have to transfer quite a lot of data to update a single grid point. The way to express this is the arithmetic intensity: it states how many flops we can do per byte of data transferred. The easiest way to assess it is simply by counting. We count all the floating-point operations here, one, two, three, four, five, so five flops, and then we count all the data accesses, which is one, two, three, four, five, six, seven.
27:44
OK, so that is 7 accesses times 8 bytes in double precision, and we end up at about 0.1 flops per byte transferred. Now we want to know whether we really are memory-bandwidth limited, and for that
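The counting argument can be written down directly; the numbers are the ones from the slide (5 flops, 7 double-precision accesses), and the helper function is just my shorthand for the definition.

```python
# Arithmetic intensity = floating-point operations per byte of data moved.
def arithmetic_intensity(flops: int, accesses: int, bytes_per_value: int = 8) -> float:
    """Flops per byte for an operation with the given flop and access counts."""
    return flops / (accesses * bytes_per_value)

# The 3-point stencil from the talk: 5 flops, 7 accesses of 8-byte doubles.
print(round(arithmetic_intensity(5, 7), 3))  # ~0.089 flop/byte, i.e. roughly 0.1
```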
27:57
we use a very simple analytical model, the roofline model, which was published by a group from Berkeley. It is insightful because it essentially says that the attainable floating-point performance can be separated into two regimes: one is limited by flops, and the other is limited by memory bandwidth. If the arithmetic intensity is high, if we do many flops per byte, we are in this regime here, limited by the peak floating-point performance; if the arithmetic intensity is lower, we are here, in the memory-bandwidth-bound regime. We can now go back to our 0.1 flops per byte and find that we are here on the left side of the curve, and this is typical for the stencil operations which we find in the dynamical cores of weather and climate models. Now, if you increase the available memory bandwidth, the time to solution will of course go down, and this is why we think GPUs are good for our models.
29:06
We ported our entire code, and the whole thing is very well documented in this paper. The code, COSMO, is about 300,000 lines of Fortran. The dynamical core, which solves the governing equations, was rewritten in C++, and there we use a domain-specific language that some of us developed, called STELLA. STELLA is nice because it abstracts the hardware architecture away from the operations, so we now have backends for multicore CPUs and for GPUs, and it allows very aggressive low-level performance optimizations, for instance changing the loop order, while still maintaining the same syntax at the higher level of the code. The physics was ported using compiler directives; that part was more involved, but the essential thing is that after about 10 person-years of work we had the entire time stepping ported and validated, and this avoids expensive data movements between CPU and GPU at each time step. So what did we gain from using this new architecture? One way to look at it
30:21
is through a strong-scaling experiment. We take a given problem, which in our case is just a given domain size, and then we throw an increasing amount of resources at it, and we expect the time to solution to go down proportionally with the number of nodes. That is what we show here: on the y-axis you have the time per time step, which is a measure of the time to solution, and along the x-axis we decrease the number of nodes, or equivalently increase the number of grid points we compute per node. Here on the right you can see the linear scaling regime, where the time to solution decreases roughly in proportion to the resources you throw at the problem, but down here we see a saturation regime, which can best be understood through a little performance model, for those who are interested. So what do we gain? At the point which we think is optimal, we gain about a factor of 6 in time to solution, or, even more important for us, about a factor of 8 in resources. So now we have a very nice and efficient model, and the question is how far we can push this. I have shown you regional simulations over Europe, but the problem of climate sensitivity, the uncertainty range I showed you at the beginning, is a global phenomenon, and we wanted to know how far we can get. So we now do the other
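Strong-scaling efficiency as described here can be quantified like this; the timings below are made-up placeholders for illustration, not the measured Piz Daint numbers.

```python
def strong_scaling_efficiency(t_base: float, n_base: int,
                              t: float, n: int) -> float:
    """Measured speedup divided by the ideal speedup when going from
    n_base nodes to n nodes on a fixed problem size (1.0 = linear scaling)."""
    ideal_speedup = n / n_base
    measured_speedup = t_base / t
    return measured_speedup / ideal_speedup

# Hypothetical timings (seconds per time step):
print(strong_scaling_efficiency(8.0, 100, 4.0, 200))  # 1.0: linear regime
print(strong_scaling_efficiency(8.0, 100, 1.6, 800))  # 0.625: saturation
```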
32:14
part, the so-called weak scaling of the problem. By weak scaling we mean that we vary the size of the computational domain and vary the number of nodes we throw at the problem proportionally, so we expect the time to solution to stay constant, which is not a given. In essence it means that for the first experiment here, where we simulated the region over the Alpine arc, we used about 10 nodes; for the European
32:54
domain that I showed you before, we used about 100 nodes. The question now is: will this entire model scale to the planet?
33:04
That would be something around 5,000 nodes, and the questions we want to answer are: first, does it scale; second, what is the time to solution, are we there yet; we also want to establish a baseline at 1 kilometre; and finally we want to assess the efficiency of COSMO. So we did this entire
33:27
exercise, and here again we have the time per time step on the y-axis, as a measure of the time to solution, and the number of nodes up to the entire size of the machine, which is about 5,000 nodes. We can see that the time to solution actually stays constant, which is what we expected from the numerics of the dynamical core. OK, so we now have a model that can scale at very high resolution to global domains, and we can do some experiments with it. For instance, we can
34:05
test whether we are fast enough to do global simulations. We did this with an idealized experiment, the baroclinic wave, a very famous test case for climate models by Jablonowski and Williamson from 2006, in which we simulate the evolution of a baroclinic wave. You can think of it essentially as the extratropical cyclones I showed you before, what you know as autumn storms. Because we have a regional model, we need to apply some tricks: in the west-east direction we made the domain periodic, so it wraps around, and we have a small problem at the poles, because in a regional model, as you can see here, the grid distances become very, very small; at the pole we ultimately get a singularity, and we have to address this. What we do is simply cut the domain at 80 degrees south and north, because here we do a computational experiment, and we nevertheless recover about 98.4% of the surface; this is simply the sine of 80 degrees. We then use this setup to do a simulation which is about 10 days long, and here we come to the
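That percentage follows from a standard result: the fraction of a sphere's surface lying between latitudes plus and minus a cut-off latitude is the sine of that latitude. A quick check (the speaker quotes 98.4%; sin 80 degrees is about 0.9848):

```python
import math

def surface_fraction(cut_latitude_deg: float) -> float:
    """Fraction of a sphere's surface between latitudes +/- cut_latitude_deg."""
    return math.sin(math.radians(cut_latitude_deg))

print(round(surface_fraction(80.0), 4))  # ~0.9848, i.e. about 98.5% of the surface
```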
35:37
solution after 10 days. Here you can see the baroclinic wave after 10 days: you can see the low-pressure systems following each other, and in one of the low-pressure systems you can see this cold front, which gets wrapped around the core of the low-pressure system. If you look closely, you can see these very funny things here, small vortices embedded into the main flow, and such features are new. This case has been run over and over, not by hundreds of groups, but by a lot of groups, and only at grid spacings of 20 or so kilometres; now that we can go to 1 kilometre, we actually find new phenomena in a very established test
36:33
cases. OK, now coming to the time to solution. When you do climate simulations there are a lot of experiments you can run, but two very famous ones are these. The first is a 30-year time-slice experiment: you can for instance simulate the last 30 years up until today, and then simulate 30 years at the end of the century, for instance 2070 until 2100, and compare the two; this is a very famous type of experiment. The other would be century-scale simulations, where you simulate hundreds of years, and of course to do century-scale simulations you would then
37:19
need to be about 10 times faster. To do the 30-year simulations, you need to be able to simulate a few months per day, and if you now look at our benchmarks, you can see that we reach 0.23 SYPD, simulated years per day, which is a few months, and we can reach that today at a grid spacing of about 2 kilometres when we scale to the full machine. At 1 kilometre we need about another factor of 5 to 7 to get into this window, and then for the century-scale simulations we need yet another factor of 10 to do century-scale runs at 1 kilometre. So we have a big challenge ahead of us. And of course these simulations come at a cost: we measured the power required, the energy in megawatt-hours per simulation year, and extrapolated from there to a century-long simulation. You can see, for instance, that one century at 1 kilometre would use about 18 gigawatt-hours, and using the carbon intensity of the Swiss energy mix you can convert that into CO2 emissions: one such simulation would emit about 3,200 tonnes of CO2 equivalent. Just to compare, the annual carbon footprint of a person is between 0.1 and 10 tonnes of CO2, depending on where on Earth you live and of course on your lifestyle. So we have a big number here that we have to bring down, and we have a lot of ways we can try to do that: we can look at the energy mix, we can look at the algorithms in the model, we can look at the arithmetic and the maths, we can look at compilers, we can look at the computer architectures, and we can look at the chip architectures. So there are a lot of questions still open and a lot of potential to address these problems. Nevertheless, we can also say that we can do global simulations at kilometre-scale resolution, so the nice animations I showed you also apply globally. But first we wanted to establish a baseline, and we wanted to know how efficient we are on today's computers. To this end,
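The throughput and footprint arithmetic can be reproduced as follows. The carbon-intensity value below is my own assumption, back-derived from the quoted 18 GWh and roughly 3,200 tonnes; it is not a figure from the talk.

```python
def wallclock_days(simulated_years: float, sypd: float) -> float:
    """Days of wall-clock time at a given simulated-years-per-day rate."""
    return simulated_years / sypd

def co2_tonnes(energy_gwh: float, intensity_g_per_kwh: float) -> float:
    """CO2-equivalent emissions in tonnes for a given energy use."""
    kwh = energy_gwh * 1e6                   # 1 GWh = 1e6 kWh
    return kwh * intensity_g_per_kwh / 1e6   # grams -> tonnes

print(round(wallclock_days(30, 0.23)))  # a 30-year slice at 0.23 SYPD: ~130 days
print(round(co2_tonnes(18.0, 178.0)))   # ~3200 t CO2-eq (assumed grid intensity)
```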
39:58
we proposed a new metric, which is called the memory usage efficiency, and it draws on two main ideas. The first idea is that if you are limited by memory bandwidth, you of course have to get as close as possible to the upper bound that I showed you in the roofline model. So the first term in this metric is the measured bandwidth, what you measure during the execution of your model, and in the denominator we have the achievable bandwidth, which we can get through a set of highly optimized micro-benchmarks. The other point, and you may have heard the expression "floating-point operations are free", comes from the fact that floating-point operations on today's architectures are actually much cheaper than data movements: about a hundred times cheaper in terms of time and about a thousand times cheaper in terms of energy; that was measured a few years ago. So the second idea is that in order to reduce the energy, this factor of a thousand, we have to reduce the overall amount of data transfers. We can easily count the number of data transfers we do in our model, but then comes the hard part: we have to know the lower bound, how many transfers are actually needed. To this end, Torsten, Carlos and Greg developed a performance model. It is hard to explain in two minutes, but in essence what it does is this: you take an operation like the one I showed you before, you look at what data it requires, and then you match these data requirements to your architecture, for instance to the different cache levels. The important part is that this performance model doesn't only look at one individual operation at a time: if you do an operation, it will have predecessors and it will have successors, and they can share data. The performance model looks at the algorithm, or at the implementation of the algorithm as it is written, and then gives us a lower bound on the required data transfers. We obtained that for our model; the number itself is not very informative on its own, but we can see that we perform quite well on the first term, which means that our implementation saturates the GPU architecture quite well, and that we can maybe do a bit better with regard to the amount of data we transfer. We now hope that other groups will take up this metric and compare, so we can learn from each other. All right, that
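In the spirit of the two terms just described, the metric can be sketched as a product of a bandwidth-saturation term and a data-traffic term. The exact definition is in the group's paper, so treat this as a schematic of the idea, with made-up sample numbers.

```python
def memory_usage_efficiency(measured_bw: float, achievable_bw: float,
                            required_bytes: float, actual_bytes: float) -> float:
    """Sketch: how well the code saturates the achievable bandwidth, times
    how close its data traffic is to the lower bound from the data-movement
    performance model. 1.0 would be perfect on both counts."""
    saturation = measured_bw / achievable_bw
    traffic = required_bytes / actual_bytes
    return saturation * traffic

# Hypothetical example: 85% bandwidth saturation, 1.5x more traffic than needed.
print(round(memory_usage_efficiency(425e9, 500e9, 1.0e12, 1.5e12), 3))  # 0.567
```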
43:09
was mostly it. Down here are my conclusions. I hope I was able to show you that we have some very interesting problems from physics to address, and that we have some very interesting computational-science problems to solve, if you want to attack big questions such as climate sensitivity. There is a lot of potential for improvement and a lot of interesting science to be done, and I hope that maybe some of you will be interested and talk to me, or tell me that everything I said is wrong and that you could do it all much more easily; that would be amazing. So thank you for your attention.

Moderator: Thank you, that gave us a good view of how you do your work; I must say I am quite jealous of the machines you get to use. One moment before the questions: if you want to ask in German, that is totally OK; we will try to translate it into English so that everybody can understand.

Question: Thank you for the excellent work. Have you ever tried to adapt the grid size locally, as is done in mechanical engineering, to make the simulation more precise where it matters?

Answer: That is a very active research topic in my community; much less in climate, but a bit more in weather prediction, because there you have to be even more efficient than we are in climate. What you can do, for instance: say you are interested in tropical cyclones and hurricanes; whenever you detect a cyclone or hurricane, you can dynamically adjust and refine your grid. For climate this is a bit less interesting, in my personal opinion, because the energy balance of the oceans is mostly governed by the low-level, very shallow clouds, and to resolve those clouds you need a grid spacing of around 1 kilometre and below everywhere anyway. So maybe we should focus on that first, before going into these refinements. Thank you.

Question: Would observing the clouds help with describing them, assimilating them, flying through them with aircraft, radar, or something like that?

Answer: Yes, we do that a lot; not me personally, but there is a lot of information out there: people flying through clouds with aircraft, people trying to model the individual processes, how cloud drops form and how they interact. We have been doing observation campaigns for decades. One of the most interesting ones will happen, I believe, in a few months; the Max Planck Institute in Hamburg is involved, and they will do some very interesting new experiments, and at the same time they will also run these high-resolution models at 1 kilometre or below, and try to leverage the knowledge gained from both of these fields. And of course we also use satellites, so there is a lot of observation of clouds; but the problem is extremely hard.

Question: I am from a somewhat different field; I used to do some medical imaging on GPGPUs, and I was rather surprised when you said that memory bandwidth was the most decisive thing about your use of GPUs, because usually you use them for the greater computational density they provide. So is it mostly about having access to really fast memory at a large scale? And would it make sense to develop application-specific hardware for your kind of computations in climate modeling?

Answer: Why we use them really depends on the characteristics of the application. If you remember the roofline model: the increase in memory bandwidth between hardware generations was about as big as the increase in peak performance. You have a very different view, of course, because I guess you do a lot of matrix-
48:26
matrix multiplies, and GPUs are actually useful for both types of problems. Now, why they were chosen here: I know the people who made that decision, and they did a very thorough analysis. One reason is that to build supercomputers you usually leverage what is there in the consumer market; you take what is already out there and try to put it together in a new way. Rolling your own chip, and there is a talk about that, is very expensive. Nevertheless, there have been some attempts, for instance using FPGAs, to design your own cores and chips, and this is something we will certainly have to look at in the future, but it has not been done operationally yet.

Question: Thanks for the interesting talk. Currently, cloud and solar radiative transfer is modeled in a simplified way; do you also plan to experiment with 3-D radiative transfer, like Monte Carlo methods or something similar?

Answer: Right now our radiative transfer uses a delta-two-stream approach. Whether you want to go to 3-D depends on the processes you are interested in: if you are interested in very high-resolution radiative transfer, you may of course want to go there, but radiative transfer is the second most expensive part of a model; the most expensive is the dynamics, and the next one is radiative transfer. Actually, today we do not run radiative transfer every time step; we currently run it about every quarter of an hour, which we think is just enough not to affect the results of our simulations too much. So first I would like to increase the frequency at which we call it before going to 3-D radiative transfer. It is a trade-off, and currently we think it is OK to stop there; but if you want to go to 1 metre, yes, there is a lot of work, for instance using ray-tracing ideas; a lot of work is going on in the community.

Question: The Fortran code you were talking about has been around for quite a few years. What happens to all the porting work when the architecture changes?

Answer: You have a number of ways you can approach this porting problem, and it depends a lot on the politics in your community: what the community is trying to be and where it is trying to go. One idea for porting large scientific codes to different architectures is that you keep the master branch of your code as clean and as well tested as possible, and whenever a new architecture comes around, you take that code branch, let a pile of PhD students loose on it until it is ported, and then you have a new code. The other approach is what we tried to do. COSMO is run by what we call a community or a consortium: I think about 15 weather services around the world, on the order of 160 universities, and I do not know how many individuals off the top of my head. They all have different machines and different architectures, and the way we chose is that there is a single master version of the code. We try to contain the code at a high level, because we want to make a minimal number of changes between architectures, and this is why we use the STELLA library, in order to make the low-level optimizations to the code, and likewise compiler directives, with which we are supposed to be able to not change the code that much. In practice this is not possible for every small detail, but it is for a large part of the code. So there are two approaches, and for those interested in
53:45
it, here is a list of literature, and I think this paper describes it in detail.
53:56
Great, we still have some time. Number 8, in the back.

Question: Thank you for the talk. You stated that most of your calculations were for climate models; do you think it is feasible to apply these to operational short-range weather models?

Answer: The development of the GPU version was actually driven to a large extent by MeteoSwiss, the Swiss weather service; a lot of the developers were paid by MeteoSwiss, and I think a bit of the initiative was also born there, because today they run the code I showed you operationally, I think with simulations at around 2 kilometres and a 1-kilometre simulation. Currently the code is being integrated into the master branch, and I know that several other weather services want to pick it up, because for them it is much less about the large domains; it is about the energy use. By switching architecture you save a lot of energy, and if you have to pay the energy bill yourself, the investment in the engineers is compensated quite fast.

Question: Thank you, I have four questions. Firstly, is there a practical limit on the minimum time to solution? Secondly, what is the mean time between failures for machines like the ones you work with? Thirdly, what is the minimum resolution that you expect to be useful; will there be a qualitative plateau eventually? And lastly, how does the competition between models and parameterizations work; how much does politics play a role, in the sense of international politics and competition between climate models coming from different nation states?

Answer: I will start with the first one. The limit on the minimum time to solution is the latency of the architecture, that is, the latency of the memory access. There will be some physical limits on how fast you can access memory, and that is why we think it will be very hard to get to 10 or 20 simulated years per day: today we cannot see an architecture whose latency is short enough. The second question, can you help me again? Right, the mean time between failures for supercomputers. I do not know that number off the top of my head, but from my experience of doing these simulations: we ran them in month-long chunks, and I would say about 10% of the chunks failed, and it was mostly disk access that failed, not the computers themselves; whenever one chunk failed, we simply resubmitted the whole thing. So I do not know the number in percent, but the machines themselves are very robust. The third question, sorry, I am so nervous I cannot remember; ah, the minimum resolution. The ultimate resolution that we want to go to is on the order of 10^-2 metres, about a centimetre, because that is where the inertial subrange of turbulence ends. And you have a question at number 5?

Question: Thank you for the talk. My question is on the grid size: with a smaller grid size you also have to decrease the time step, and you are using the weather model, as far as I understand, for climate modeling. So your problem is that the time steps are getting very small, and if you do long climate runs with them, your
59:12
results will be full of numerical errors. So why not do one very good simulation for one year, derive a parameterization from it, and use that parameterization in the climate models again, perhaps with adaptive grids and so on, if there is enough time to go into that?

Answer: In model schemes, explicit versus implicit schemes, yes: with explicit schemes you have to decrease your time step along with the grid spacing. But the nice thing is that the methods are only local; the perfect linear scaling I showed you exists because the methods involve only local communication, whereas many implicit methods require global communication. If you think of it in terms of the energy question again, their energy consumption is a lot higher. Coming to the numerical error: the error is mostly governed by the grid spacing and a bit less by the small time steps, because first of all we have a lot of forcings: we have the sun, which is a very strong forcing, we have continents that force the flow, we have mountains that impose forcing, and we have very large dynamical systems that impose forcing. So when you compare the solar forcing with the numerical error, the forcing is dominant. The other thing is that we have a chaotic system; think of the Lorenz papers, if you know what I am talking about. Very small errors cause very large differences in the simulation after a couple of days, and we have methods to address this: we do a lot of simulations and perturb them a bit, so in the end they will have a climate that is very similar in the mean, even though the states at any given time step are very different from each other. This goes a bit far for now, but if you are interested, I can recommend some papers by Tim Palmer and his group.
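The time-step and grid-spacing coupling mentioned in this answer is the CFL condition for explicit schemes; a minimal sketch, with an illustrative signal speed chosen by me (roughly the speed of sound, relevant for fully explicit compressible solvers):

```python
def max_stable_dt(dx_m: float, signal_speed_ms: float, courant: float = 1.0) -> float:
    """Largest stable time step (s) for an explicit scheme: dt <= C * dx / u."""
    return courant * dx_m / signal_speed_ms

# Halving the grid spacing halves the admissible time step:
print(round(max_stable_dt(2000.0, 300.0), 2))  # ~6.67 s at 2 km, 300 m/s signal
print(round(max_stable_dt(1000.0, 300.0), 2))  # ~3.33 s at 1 km
```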