
Reliability of regional climate model trends


Formal Metadata

Title
Reliability of regional climate model trends
Number of Parts
28
License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
A necessary condition for a good probabilistic forecast is that the forecast system is shown to be reliable: forecast probabilities should equal observed probabilities verified over a large number of cases. As climate change trends are now emerging from the natural variability, we can apply this concept to climate predictions and compute the reliability of simulated local and regional temperature and precipitation trends (1950–2011) in a recent multi-model ensemble of climate model simulations prepared for the Intergovernmental Panel on Climate Change (IPCC) fifth assessment report (AR5). With only a single verification time, the verification is over the spatial dimension. The local temperature trends appear to be reliable. However, when the global mean climate response is factored out, the ensemble is overconfident: the observed trend is outside the range of modelled trends in many more regions than would be expected by the model estimate of natural variability and model spread. Precipitation trends are overconfident for all trend definitions. This implies that for near-term local climate forecasts the CMIP5 ensemble cannot simply be used as a reliable probabilistic forecast.
Transcript: English (auto-generated)
This paper is about the reliability of climate models. The idea is very simple: climate change is now so strong locally that we can start trying to verify the models, just as the weather people do all the time. For instance, this is the weather forecast for tomorrow.
It says 60% chance of rain, and the weather people have checked that if you take a lot of days with a forecast of 60% chance of rain, then on 60% of those days it actually rained, and on 40% it stayed dry. We tried to do the same with climate model trends. So here is the temperature of the Netherlands, corrected for changes in the places where the temperature was observed, the way it was observed, and so on. You see an upward trend to the 1940s, a slight downward trend to the 1970s, and then a steep rise up to about now. This looks very similar to the global mean temperature. The global mean temperature has less variability, but it also shows an upward trend to the 1940s,
a slight downward trend to the 1970s, and an upward trend to now. So an easy way to describe the temperature in the Netherlands is to say that it went up proportionally to the global mean temperature, but about two times faster, if you look at the scale here: the regression of the local series on the global one is about two. So we can make a histogram of the trends in the climate models over here.
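The scaling the speaker describes, regressing the local temperature series on the global mean to get a local amplification factor, can be sketched as follows. This is a minimal illustration with synthetic data; the series, noise levels, and variable names are all assumptions, not the paper's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: a smooth "global mean" temperature series, and a local
# series that warms twice as fast but carries extra weather noise.
years = np.arange(1950, 2012)
global_mean = 0.01 * (years - 1950) + rng.normal(0.0, 0.05, years.size)
local = 2.0 * global_mean + rng.normal(0.0, 0.3, years.size)

# Regress the local series on the global mean; the slope is the local
# amplification factor (about two for the Netherlands in the talk).
slope, intercept = np.polyfit(global_mean, local, 1)
print(f"local warming is about {slope:.1f} times the global mean warming")
```

With real observations the noise term is much more structured, but the recovered slope is the same quantity the speaker reads off the slide.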
The blue bars are the climate models, the purple ones the observations. You see the observations show a trend two times faster than the global mean one, while the models lie between about half and two times, with a big spread. That big spread has two reasons. The first is natural variability: of course there is weather. That's why the curve for the Netherlands is so much noisier than the global mean temperature. The second is model spread: not all climate models are the same. Some of them predict more warming in the Netherlands and some predict less, so they also give a range of uncertainty. The question we want to answer is, just like for the weather forecast,
if the observation should fall within the central 80% of the model range, does it actually do so 80% of the time? You cannot check that for a single point, so we have to look at the whole world. We start in 1950, because from 1950 onwards we trust the observations more. And you see the pattern of climate change here: there is more warming in the Arctic, in northern Canada, in Siberia.
There is more warming in the deserts: the Sahara, the deserts in the western US, Australia. And there is less warming over the oceans, especially the Pacific, the North Atlantic, and the Southern Ocean. Here you see the modelled warming, and you see basically the same pattern; the models are not doing so badly.
You see the enhanced warming in the Arctic, the deserts getting warmer, the oceans warming less. But the details are not the same. If you look here at the percentile of the observed local warming within the ensemble, you see that the models underestimate the warming in Asia, here in the West Pacific, and in the eastern Indian Ocean, and that they overestimate the warming trend in the Pacific Ocean and off the coast of the US. The question is whether this is due to chance, or to problems with the models or with the way the models are run. So if we gather all this information together, we can make something called a rank histogram, and that should be flat.
5% of the map should fall below the lower bound of the ensemble, 5% in the next 5% band, et cetera. Instead we see that more than 10% is in the lowest bin, and close to 20% falls outside the ensemble on the other side, where we only expect 5%. So our conclusion is that the large-scale patterns are very similar, but on the small scales, the regional scales where many people want climate forecasts, the ensemble is overconfident: there are more regions where the observed trend and the modelled trends disagree than you would expect by pure chance.
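The rank-histogram check described above can be sketched like this. This is a schematic with synthetic random numbers; in the paper the "cases" are grid points on the map and the ensemble members are CMIP5 trend simulations, whereas here the sizes and distributions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n_cases = 2000    # e.g. grid points on the map (illustrative)
n_members = 19    # ensemble size (illustrative)

# Reliable case: the observation is drawn from the same distribution
# as the ensemble members.
ensemble = rng.normal(0.0, 1.0, (n_cases, n_members))
obs = rng.normal(0.0, 1.0, n_cases)

# Rank of the observation within each sorted ensemble (0 .. n_members).
ranks = (ensemble < obs[:, None]).sum(axis=1)

# A reliable ensemble gives a flat histogram: each of the n_members + 1
# bins holds about 1 / (n_members + 1) of the cases (5% for 19 members).
hist = np.bincount(ranks, minlength=n_members + 1) / n_cases
print("lowest bin (reliable):", hist[0])

# Overconfident case: the ensemble spread is too small, so too many
# observations fall outside the ensemble range and the histogram is U-shaped,
# as found for the regional trends in the talk.
narrow = rng.normal(0.0, 0.5, (n_cases, n_members))
ranks_narrow = (narrow < obs[:, None]).sum(axis=1)
hist_narrow = np.bincount(ranks_narrow, minlength=n_members + 1) / n_cases
print("lowest bin (overconfident):", hist_narrow[0])
```

The populated outer bins of the second histogram correspond to the "more than 10%" and "close to 20%" outside the ensemble that the speaker reports, against the 5% expected from a reliable ensemble.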