On Mora et al.'s Reply

Yesterday saw the publication of our Comment on Mora et al., along with Mora et al.’s Reply and an associated ‘News & Views’ piece. Although the Editors deserve credit for commissioning a News & Views piece on this exchange – a first for a Comment in Nature – there are still errors in Mora et al.’s Reply. A previous post summarised the issues with the original paper, and Doug McNeall also discusses the main issues.

Firstly, the disagreement with Mora et al. is essentially about how to quantify the best estimate and uncertainty of climate emergence. That the signal of climate change will emerge in the tropics first, and has already been seen to emerge in some places, is well known, robust and not under discussion. These changes will be in regions of high biodiversity.

However, claims to be able to predict the precise date of climate emergence for individual cities to within a couple of years (as Mora et al. do) are wrong and misleading to decision makers. The aim of our Comment was to point out how such studies might be improved in future to avoid this overconfidence – interpreting an ‘ensemble of opportunity’ such as CMIP5 is not straightforward. Mora et al. simply do not acknowledge the substantial uncertainty associated with future climate projections.

Before going through Mora’s Reply line by line, we highlight the main error:

Hawkins et al. further suggest that the standard error is the wrong choice as “the future evolution of climate will not behave like the mean, but as a single realization from a range of outcomes”. In other words, all models’ simulations are equally likely, and thus, statistics that describe the broad range of projections are more suitable. This premise, however, conflicts with findings that it is the multi-model average that best approximates mean observed conditions, often better than any individual model, as demonstrated by prior studies4 and confirmed in our paper.

This one paragraph neatly demonstrates Mora et al.’s key misunderstanding and the Reviewers of the Reply made it very clear to Nature that these statements were incorrect.

But, why is this paragraph wrong?

Mora et al. claim that because the multi-model mean best represents the mean observed climate state, then it is only the uncertainty in the multi-model mean itself that matters for future climate trends. Mora et al. are still advocating the use of the standard error as an appropriate measure of uncertainty for future climate.

1) The real world only has a single realisation. The uncertainty in emergence time in the real world cannot possibly go down simply because we produce more simulations, which is what Mora et al. are effectively claiming. Otherwise, with an infinite ensemble Mora et al. would claim there was zero uncertainty in emergence date – clearly implausible.

2) The climate models in the ensemble have different process descriptions, all of which may currently be regarded as possible, yet could give major inter-model differences in the future. Why? Partly because the processes that govern the mean state are not necessarily the same as those that govern trends. For example, the temperature through the next century largely depends on the strength of the climate feedbacks (e.g. cloud feedbacks, carbon cycle feedbacks etc.), which are not really measured in the mean state. There may also be compensating errors in the representation of the processes which cancel out in the mean state, which would produce a biased projection for the future. Finally, there is evidence that the spread of the models may not be large enough when examining historical trends.

3) The GCMs may all be missing or inadequately representing some key ingredient which produces different strength feedbacks. In this case, the average of the models will be biased.

4) The models are not independent. Many have very similar component models put together in slightly different ways. For example the NEMO ocean model is used in several GCMs, as is the ECHAM atmosphere model & CICE sea-ice model etc. One recent study suggested that there were effectively around 8 independent GCMs. This aspect alone would double Mora et al.’s quoted uncertainties.
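Points (1) and (4) can be made concrete with a minimal numerical sketch (entirely synthetic numbers, not CMIP5 output): as the ensemble grows, the standard error of the ensemble mean shrinks towards zero, but the spread of outcomes – which is what a single realisation of the real world samples from – does not. And if only ~8 of 39 models are effectively independent, the true standard error is roughly double the naive one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical emergence dates: each "model" draws from the same
# underlying distribution (mean 2070, spread 15 years).
def ensemble(n):
    return rng.normal(2070.0, 15.0, size=n)

for n in (10, 39, 1000):
    dates = ensemble(n)
    sd = dates.std(ddof=1)              # spread of possible outcomes
    se = sd / np.sqrt(n)                # uncertainty in the *mean*
    print(f"n={n:4d}  spread = {sd:5.1f} yr  std. error of mean = {se:4.1f} yr")

# The standard error scales as 1/sqrt(n): treating 39 models as only
# ~8 effectively independent ones inflates it by sqrt(39/8) ~ 2.2.
print(np.sqrt(39 / 8))
```

The spread stays near 15 years however large the ensemble, while the standard error of the mean can be made arbitrarily small – which is exactly why it cannot represent the uncertainty facing the single real-world realisation.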

Furthermore, the citation to Tebaldi & Knutti (2007, ref. 4) is unjustified as that paper does not support Mora et al.’s contention. The summary of that paper even describes the need to avoid ‘over-optimistic reliance on consensus [multi-model mean] estimates’. This is also highlighted by Claudia Tebaldi in a Carbon Brief blog post: “Their use of Tebaldi and Knutti (2007) to support their projections … is laughable, since our paper actually presents arguments against it”. Annan & Hargreaves (2011) discussed this issue too.

The effects of this misconception appear throughout Mora et al.’s Reply:

In the accompanying Comment, Hawkins et al. suggest that our index of the projected timing of climate departure from recent variability is biased to occur too early and is given with overestimated confidence. We contest their assertions and maintain that our findings are conservative and remain unaltered in light of their analysis.

In the rest of the Reply, Mora et al. present no analysis or evidence which shows that our Comment is wrong. In many cases we demonstrate that the findings are indeed altered – for example, we show that results in the biodiverse regions of the Amazon and Southern Ocean are substantially affected by the analysis, but Mora et al. simply ignore this.

We presented an index that quantifies the year after which the climate continuously exceeds the bounds of historical variability, using 39 CMIP5 Earth System Models. Uncertainty in climate projections from these models arises chiefly from natural (‘internal’) climate variability, model error and uncertainty in the future evolution of greenhouse gas concentrations3.

Not much argument with the text itself, but Mora et al. cite Deser et al. (2012) as Ref. 3 to back up their statement. However, Deser et al. use a single model and a single forcing pathway to examine the contribution of internal variability to uncertainty, so that paper does not discuss model uncertainty or scenario uncertainty as Mora et al. claim. Hawkins & Sutton (2009, BAMS) would be a better reference.

Hawkins et al. suggest that by “ignoring the irreducible limits imposed by [] random fluctuations, Mora et al. express their emergence dates with too much certainty”. However, our index was calculated independently for individual model simulations, which include internal variability. By considering individual simulations (internal variability) from each of 39 models (model-to-model error), under two emission pathways (scenario uncertainty), our results account for the three major sources of uncertainty in climate projections.

If Mora et al. understood the findings of Deser et al. (2012), which they cite in the preceding sentence, they would realise that simply using a single simulation from each model does not properly sample the uncertainty due to different realisations of internal variability. This will be a recurring theme.

Our analysis of climate departure included only projections to the year 2100 due to their greater availability (only a third of the CMIP5 models have projections beyond 2100). Hawkins et al. assert that the use of model projections to the year 2100 reduces the global mean timing of climate departure because we assigned the year 2100 to cells where unprecedented climates might occur after 2100. This is a valid constraint, and thus, climate departure at 2100 in our results should be interpreted as emergence that will occur in 2100 or later, or not at all.

Some progress. Mora et al. now acknowledge that they artificially assigned an emergence date of 2100 to non-emerged grid cells (not mentioned in the original paper) and that this is a valid constraint on their results. However, Mora et al. have ignored that our analysis shows that this choice actually affects emergence dates before 2100 because of the possibility of a cool year in the simulations after 2100. The divergence between the solid and dashed lines in Fig. 1a of our Comment clearly demonstrates this. Their statement should read that departure dates after around 2050 are not necessarily robust because the data is limited to end in 2100.
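This pseudo-emergence effect is easy to demonstrate with a toy example (synthetic data and a hypothetical `emergence_year` helper, applying the paper's definition of emergence as the year after which all subsequent years lie outside the historical bounds – not Mora et al.'s actual code): a single cool year after 2100 shifts the emergence date by decades, but is invisible if the data are truncated at 2100.

```python
# Emergence as defined in the paper: the year after which every
# subsequent year exceeds the historical bound.
def emergence_year(years, temps, upper_bound):
    emerged = None
    for yr, t in zip(years, temps):
        if t > upper_bound:
            if emerged is None:
                emerged = yr
        else:
            emerged = None          # any later cool year resets the clock
    return emerged

# Synthetic series: steady warming plus one cool excursion in 2115.
years = list(range(2000, 2151))
temps = [0.02 * (yr - 2000) for yr in years]
temps[years.index(2115)] = 0.5      # a single cool year after 2100

bound = 1.0                          # hypothetical historical maximum

short = years.index(2100) + 1
print(emergence_year(years[:short], temps[:short], bound))  # → 2051
print(emergence_year(years, temps, bound))                  # → 2116
```

With data truncated at 2100 the series appears to emerge permanently in 2051; with the full series, one cool year in 2115 pushes the true emergence date to 2116.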

The implications of using projections to 2100 are, however, exaggerated by Hawkins et al. First, we recalculated our index using the multi-model median, which is less affected than the mean by outlier projections of climate departure after 2100, and found only small differences. The global median temperature departure was 2076 under RCP4.5 (the reported global mean was 2069) and 2045 under RCP8.5 (the reported global mean was 2047). The multi-model median delivers similar results to the mean, even if projections to 2300 are used.

Some misleading statements here because Mora et al. are obsessed with the central estimate, whilst ignoring the rest of the distribution, as highlighted in their key error above. In this paragraph, Mora et al. recalculate their results using the median rather than the mean (as our Comment suggested). It makes small differences using data up to 2100, as we also highlighted – but this would not have been known a priori. It may also make relatively small differences to the multi-model median using data up to 2300, as we also highlighted. BUT, the inter-model spread in the median when using data up to 2300 is far, far larger, as shown in our Fig. 1a. Mora et al. again ignore this in their Reply and claim that we are exaggerating.

For instance, the analysis in ref. 1 shows that 7 out of 13 models exhibit similar or even earlier global median temperature departures using projections to 2300 compared to global mean based on projections to 2100 (upper plot in figure 1a in ref. 1).

Yes, for 7 of 13 models using the median and data up to 2300, the global emergence metric is similar to using the mean and data up to 2100. However, Mora et al. neglect to mention the other 6 models which show substantially later emergence. For example, the MPI-ESM-LR model emerges around 2250, instead of around 2080, and four other models emerge post-2100 instead of pre-2080. Again, the uncertainty is being ignored.

Second, Hawkins et al. chose RCP4.5, stating that the limitation is “less pronounced for annual temperatures in higher forcing pathways”. Indeed, by 2080, 97% of the planet will face temperature departure for the remainder of the twenty-first century under RCP8.5. Under the RCP4.5 pathway, 67% of the planet will face temperature departure by 2080, highlighting the imminent departure of Earth’s climate even under an optimistic mitigation scenario.

Some progress. Mora et al. now say that by 2080 (which is ‘imminent’, apparently), the planet will face emergence for just the following 20 years. Slightly different to how it was characterised in the original paper. But, again, where is the uncertainty? Our Fig. 1a highlights that by 2080, the emergence fraction might be 35% or 85% depending on the model chosen, and that is when restricting the data to end in 2100. When using the longer simulations the emergence fraction reduces to 25%-80%. Mora et al. never discuss this uncertainty. As an aside, the word ‘optimistic’ for RCP4.5 is a value judgement and should not be in a scientific paper.

Finally, from a global biological and social perspective, the potential limitation associated with climate departures beyond 2100 is small, as it is relevant only to high latitudes and not for areas where the majority of people and species on the planet live.

Again, Mora et al. ignore that their emergence dates pre-2100 are affected by their analysis shortcomings, particularly in the Amazon and Southern Ocean, which are included in their key biodiversity regions. It is not only the high latitudes which experience a post-2100 emergence, as Mora et al. claim.

Any statistical value should be interpreted based on the metric it represents. Hawkins et al. claim that by reporting the standard error of the mean our results are given with too much confidence. Our paper is transparent and clearly states that the standard error of the mean was our metric of uncertainty among models, and although the values provided should be interpreted in the context of that metric, they can easily be converted to another choice of statistic if so desired (for example, our standard error can be multiplied by √N to obtain the standard deviation).

Mora et al. are correct – the standard error can be converted to the standard deviation easily. The issue here is in the communication of the results. It can almost be guaranteed that, looking back in 100 years’ time, the vast majority of locations will not have experienced emergence within the bounds of uncertainty given by Mora et al. They are communicating extreme precision to the public and to policymakers about climate emergence with no justification – for example, ±2 years for emergence for many tropical cities. Mora et al. are effectively arguing that the IPCC, for example, provide uncertainties that are at least an order of magnitude too large.
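For illustration only (using the ±2-year figure quoted above and N = 39 models), the √N conversion Mora et al. themselves describe shows how much larger the inter-model spread is than the quoted standard error:

```python
import math

n_models = 39
std_error = 2.0                       # years: the quoted precision
std_dev = std_error * math.sqrt(n_models)
print(round(std_dev, 1))              # → 12.5
```

So a ±2-year standard error corresponds to an inter-model standard deviation of roughly ±12.5 years – an order of magnitude less precise than the headline figure suggests.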

Although there is no established ‘correct’ way to express uncertainty, at least for the results from Earth System Models that can be verified, metrics of variability around the consensus mean are more appropriate.

The first sentence I can agree with! Yes, metrics of variability around the consensus mean are more appropriate, but Mora et al. have not actually used one. And, there is no ‘correct’ way to express the uncertainty, but there are certainly ‘incorrect’ ways, and Mora et al. have chosen one, as the Reviewers pointed out.

Hawkins et al. also suggested that the standard deviation cannot be scaled to their suggested 16–84% range multi-model dispersion as the climate departures are not normally distributed. However, “contrary to popular misconception, the standard deviation is a valid measure of variability regardless of the distribution. About 95% of observations on any distribution usually fall within 2 standard deviation limits…”5. We recalculated multi-model uncertainty as the standard deviation and as the 16–84% range among model projections for temperature and found small differences. The global median multi-model uncertainty estimates by these two metrics differ by 2.5 years under RCP4.5 and 1.6 years under RCP8.5.

Mora et al. are arguing that because it makes only a small difference *after checking*, there was no need to use the more appropriate statistic? Also, note ‘usually’ in their quoted citation. The distributions that Mora et al. are examining are strongly skewed as they are ‘right-censored’ (they cannot have values above 2100). Using the standard deviation for this type of distribution is unwise. Note that the quoted values for the uncertainty estimates again ignore the post-2100 results – our Fig. 1a shows that it is the combination of errors (both truncation at 2100 and choice of mean) that helps cause the overconfidence in emergence dates.
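A quick synthetic sketch (invented gamma-distributed emergence dates, not the actual CMIP5 results) shows why 2-standard-deviation limits are a poor summary for a right-censored sample: the symmetric interval spills past the 2100 censoring point into values the data cannot contain, whereas the 16–84% range respects the support of the distribution.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "true" emergence dates with a long upper tail.
true_dates = 2040 + rng.gamma(shape=2.0, scale=40.0, size=10000)

# Right-censor at 2100, as in an analysis truncated there.
censored = np.minimum(true_dates, 2100.0)

mean, sd = censored.mean(), censored.std()
p16, p84 = np.percentile(censored, [16, 84])

print(f"mean ± 2 sd : {mean - 2*sd:.0f} to {mean + 2*sd:.0f}")
print(f"16–84% range: {p16:.0f} to {p84:.0f}")
```

The ±2-standard-deviation interval extends beyond 2100 – dates the censored sample cannot contain by construction – while the percentile range stays within the data.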

Our paper used all (not a subset) of the latest generation of Earth System Models that have complete projections for RCP4.5, RCP8.5 and historical experiments, under very conservative criteria for estimating climate departure (for example, using the minimum and maximum historical values to set bounds, defining climate departure as the year after which all subsequent years are out of historical bounds, and using a historical period already affected by human greenhouse gas emissions; we demonstrated that all these criteria delay the estimated year of climate departure). We also used data on species distributions, protected areas and socio-economic conditions to show that the earliest emergence of unprecedented climates will occur in areas with the greatest number of species on Earth, where a large proportion of the world’s human population lives and where conservation and economic capacity to adapt are limited.

We agree that Mora et al. chose a conservative metric for emergence – but they made that choice. They could have chosen a more robust signal-to-noise metric, like the IPCC AR5 and many of the other papers on this topic. A further guest post demonstrates separate issues with the GDP and population results of Mora et al.

These conclusions remain unaltered.

Does anyone now agree with this statement?

About Ed Hawkins

Climate scientist in the National Centre for Atmospheric Science (NCAS) at the University of Reading. IPCC AR5 Contributing Author. Can be found on twitter too: @ed_hawkins

17 thoughts on “On Mora et al.'s Reply”

  1. Ed,
    Thanks for this explanation, convincing to this reader at least. Mora’s “Our paper is transparent and clearly states that the standard error of the mean was our metric of uncertainty among models” doesn’t provide a reason to consider that as a more appropriate metric than (say) the quartile or 16-84% ranges of the samples.

    In passing, I’ll mention that I particularly appreciate the citation of the Pennell & Reichler paper quantifying the structural overlap among models, which I hadn’t read before. It’s not central to your argument here, but this is useful in other contexts.

  2. That first Mora et al. quote is a beauty. It sounds so convincing and is so wrong.

    The part that may be worth more consideration is that all models are typically given equal weight currently. With more and more groups plugging some sort of climate model together to be able to participate in CMIP, maybe we really need to start thinking about weighting the results or even selection.

    That may make the model spread smaller and would make it more important to communicate that the model spread underestimates the uncertainty, but that is okay and may stimulate more work on good uncertainty estimates.

    1. Hi Victor,
      Yes – the weighting issue is interesting. There is some evidence that weighting models by past performance does not reduce uncertainty or improve skill, but I can’t find the right papers at the moment. The IPCC do select models when assessing when ice-free Arctic conditions might arrive, but that is the only case where this is done.

      1. Yes, weighting is difficult. Has anyone tried to weight models by the number of articles that stand behind them? Could be a proxy for the amount of work and validation that has been done. (No idea whether I should add a smiley or not.)

    2. I care more about model agreement with observations than about model spread. Wrong physics in the CAM5 model (an assumption that the latent heat of water vaporisation is independent of temperature) has not been corrected in 3 years.

  3. Hi Jim – yes, I read the N&V. I have not seen that term used before. It may have been to appear more independent in this discussion, i.e. not using departure (Mora’s term) or emergence (our term)?

    Apart from that, I feel Power doesn’t quite get across the pseudo-emergence effect. It is all years after 2050 that are potentially suspect, not just those at exactly 2100. This is shown by the divergence between the solid and dashed lines in our Fig. 1a in the comment, and by the London example in my previous post.


    1. Thanks Ed. I just did a brief Google, and Google Scholar, search, and a look at the entries in the Encyclopedia of World Climatology–nothing whatsoever. Looks to me like he just invented a new term.

  4. However, “contrary to popular misconception, the standard deviation is a valid measure of variability regardless of the distribution. About 95% of observations on any distribution usually fall within 2 standard deviation limits…”⁵.

    5.  Altman, D. G. & Bland, J. M. Standard deviations and standard errors. Br. Med. J. 331, 903 (2005)

    The claim by Altman & Bland about “95% of observations” is inaccurate and misleading. The term “usually” is disputable, and in any case implies that we cannot apply the claim to an arbitrary situation. Strictly, all we can say is that at least 75% of observations lie within 2 standard deviations, by Chebyshev’s inequality. This was noted in one of the Responses to Altman & Bland.
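    Chebyshev’s bound is easy to check numerically (a minimal sketch, not from the cited correspondence): a distribution with six observations at the mean and two at exactly ±2 standard deviations attains the 75% figure, so no stronger distribution-free statement is possible.

```python
import numpy as np

# Extremal case for Chebyshev's inequality at k = 2: six values at the
# mean and two at exactly ±2 population standard deviations.
x = np.array([-2.0, 0, 0, 0, 0, 0, 0, 2.0])
sd = x.std()                                  # population SD = 1.0
inside = np.mean(np.abs(x - x.mean()) < 2 * sd)
print(sd, inside)                             # → 1.0 0.75
```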

  5. from e360:

    Nature commissioned an independent climate scientist — Scott B. Power of Australia’s Bureau of Meteorology — to assess the arguments of the two sides. Power said that while the 13 scientists presented a more appropriate estimate of when the planet could enter a new climatic state, “important conclusions of Mora and co-workers’ original paper remain valid.”

    1. Indeed. The conclusion that climate emergence will first occur in the tropics is robust but already well known. These regions have high biodiversity. The rest of the results are quantitatively wrong and expressed with too much confidence anyway. Getting the right overall conclusion when doing an incorrect analysis does not make a good paper.

      Given that you are writing from HIMB, U. Hawaii, I guess you are a Mora et al. author? Do you disagree with my comments on your Reply? Are any of my statements in the post above incorrect? Do you accept that errors were made in the analysis and Reply? I would like a productive discussion so we can move the debate forwards.

