Sources of uncertainty in CMIP5 projections

The recent IPCC AR5 includes a discussion of the sources of uncertainty in climate projections (Fig. 11.8 and the accompanying section), updating previous analyses of CMIP3 temperature and precipitation to the latest CMIP5 simulations. The dominant source of uncertainty depends on lead time, variable and spatial scale.

There are three main sources of uncertainty in projections of climate: uncertainty due to future emissions (scenario uncertainty, green), due to internal climate variability (orange), and due to inter-model differences (model uncertainty, blue). Internal variability is roughly constant through time, while the other two uncertainties grow with time, at different rates. Although there is no perfect way to cleanly separate these uncertainties, different methods have given similar results.

Overall, the conclusions from CMIP5 are not much changed from CMIP3. For global temperature, the spread between RCP scenarios is the dominant source of uncertainty at the end of the century, but internal variability and inter-model uncertainty are more important in the near term. For the next decade or so, internal variability is the dominant source of uncertainty. A small caveat concerns anthropogenic aerosols, which are assumed to decline quite rapidly over the next 20 years in all RCPs, so this component of scenario uncertainty may be underestimated.

For global temperature, the figures below show two different representations of this information, either as a ‘plume’ (Fig. 1) or as a fraction of the total variance (Fig. 2).
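The fraction-of-variance view in Fig. 2 can be reproduced schematically. The sketch below uses synthetic numbers, not actual CMIP5 output, and a simple running mean rather than the fourth-order polynomial fit used in the published analysis, but the partitioning logic is the same: scenario uncertainty is the variance across scenario means, model uncertainty is the variance across models (averaged over scenarios), and internal variability is the residual about the smoothed forced signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_scenarios, n_years = 12, 4, 90  # hypothetical ensemble size

# Synthetic projections: shared trend + scenario-dependent trend
# + inter-model offset + interannual noise
years = np.arange(n_years)
trend = 0.02 * years
scen = np.linspace(0.0, 0.03, n_scenarios)[:, None] * years   # scenario spread grows with time
model = rng.normal(0.0, 0.3, n_models)[:, None, None]         # inter-model offsets
noise = rng.normal(0.0, 0.15, (n_models, n_scenarios, n_years))
temps = trend + scen[None, :, :] + model + noise              # shape (model, scenario, year)

# Estimate each forced signal with a decadal running mean
kernel = np.ones(10) / 10
forced = np.apply_along_axis(lambda x: np.convolve(x, kernel, mode="same"), -1, temps)

internal = np.var(temps - forced, axis=(0, 1))        # residual variance per year
model_u = np.mean(np.var(forced, axis=0), axis=0)     # variance across models, mean over scenarios
scen_u = np.var(np.mean(forced, axis=0), axis=0)      # variance across scenario means
total = internal + model_u + scen_u

frac_scenario = scen_u / total    # grows toward the end of the century
frac_internal = internal / total  # relatively most important early on
```

By construction the three fractions sum to one at each lead time, which is what allows the stacked fractional-variance display of Fig. 2.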

For other variables and on regional spatial scales, the picture can be very different. For example, for European winter temperatures, the internal variability component is more important (Fig. 2). And, for European winter precipitation, scenario uncertainty is almost irrelevant because the internal variability and inter-model differences are relatively much larger (Fig. 3). In fact, for precipitation in all regions, the RCP scenario uncertainty is relatively small when compared to the other sources of uncertainty.

The key messages are that resolving inter-model differences could reduce uncertainty significantly, but there is still a large irreducible uncertainty due to climate variability in the near-term and, particularly for temperature, future emissions scenarios in the long-term.

Figure 1: The sources of uncertainty in global decadal temperature projections, expressed as a ‘plume’ with the relative contribution to the total uncertainty coloured appropriately. The shaded regions represent 90% confidence intervals.

Figure 2: Sources of uncertainty in global decadal (top) and European decadal DJF (bottom) temperature projections, expressed as a fraction of the total variance.

Figure 3: Sources of uncertainty in East Asia decadal JJA (top) and European decadal DJF (bottom) precipitation projections, expressed as a fraction of the total variance.

About Ed Hawkins

Climate scientist in the National Centre for Atmospheric Science (NCAS) at the University of Reading. IPCC AR5 Contributing Author. Can be found on twitter too: @ed_hawkins

13 thoughts on “Sources of uncertainty in CMIP5 projections”

  1. Ed,

    1. Does anyone ever re-run their climate models when the accuracy of data sets is questioned? Or eliminate data from inaccurate monitoring stations and re-run the model?

    New study shows half of the global warming in the USA is artificial
    Compliant thermometers say +0.155C/decade
    Non-compliant thermometers say +0.248C/decade (60% ERROR)
    NOAA final adjusted data says +0.309C/decade (99.35% ERROR)

    2. Please clarify the difference between “margin of error” and “confidence intervals.”
    I never see “margin of error” stated in papers, nor the IPCC AR5 report. I can say with 90% confidence that the 2014 hurricane season will have between 0 and 100 hurricanes. Absolutely no value in that 90% confidence, but sure looks like a huge margin of error in the prediction.


  2. Ed what is the width of the internal variability band in figure 1?

    By eye it looks to be about 0.25 K. If it’s that number and we relate it to observations, does it mean that at any one time the temperature can ‘wander’ up to 0.125 K above or below the temperature ‘prescribed’ by the net external forcing? (Caveat: there is some uncertainty when it comes to forcings.)

    A follow-on question: these uncertainties are calculated using annual data, is that right? If you repeated the exercise with decadal or longer (say 17-year) averages, would the internal variability uncertainty reduce or increase? Presumably if you’re a person who believes much of the internal variability is produced annually by ENSO, then for longer periods this uncertainty would reduce, as within a ten (or 17) year period any wandering by a strong El Niño or La Niña would be ‘smoothed’ by neutral and opposite years. If you believe in a multidecadal ‘oscillation’, then presumably wanderings within a decadal period would tend to be in the same direction and have …… I don’t know what effect …… on internal variability uncertainty.

    1. Thanks HR,
      Yes – the orange band represents how much the temperatures could ‘wander’ if we knew the scenario and model response perfectly. These graphs are for running decadal means – so for annual data the orange band would be much larger.

      There are some visual examples of the role of variability here and here, especially on regional scales.
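The effect of the averaging period raised in this exchange can be illustrated with a toy calculation (synthetic AR(1) noise with arbitrary parameters, not model output): for year-to-year ‘white’ variability, an N-year mean shrinks the spread by roughly 1/√N, whereas persistent ‘red’ variability is damped far less because excursions within a decade tend to be in the same direction.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1(n, phi, sigma=1.0):
    """Generate an AR(1) series: x[t] = phi * x[t-1] + noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

def window_mean_sd(series, window=10):
    """Standard deviation of non-overlapping window means."""
    n = len(series) // window * window
    return np.std(series[:n].reshape(-1, window).mean(axis=1))

white = ar1(100_000, phi=0.0)   # little year-to-year memory (ENSO-like caricature)
red = ar1(100_000, phi=0.9)     # persistent, red-noise-like excursions

# White noise: decadal averaging cuts the spread by ~1/sqrt(10)
ratio_white = window_mean_sd(white) / np.std(white)
# Red noise: excursions persist within a decade, so averaging helps much less
ratio_red = window_mean_sd(red) / np.std(red)
```

This is consistent with the reply above: the orange band in Fig. 1 is for decadal means, so for annual data the (mostly white) interannual part of the spread would be roughly three times wider.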


  3. Hi

    Thanks for a nice post. You state that “The key messages are that resolving inter-model differences could reduce uncertainty significantly”. There are already articles suggesting the lack of spread among climate models is a problem. My opinion is that for processes which are uncertain, diversity is needed to reflect the true confidence intervals of the projections. If you mean resolving inter-model differences through reducing the uncertainties of the physical and chemical processes represented in the climate models, I agree.

      1. Ed you have mentioned in the past that the very hottest models may be inconsistent with observation. I think you’re an author on a paper (Stott et al??) that argues this very point.

        Can you see no argument for dropping the hottest models from the ensemble?

  4. I just noticed your masthead is based on the same type of ‘plume’ graph as Fig. 1 here. I notice the thick black observation lines are in quite different places relative to the grey historical model spread. So, out of curiosity, what causes the big difference?
