Making sense of the early-2000s warming slowdown

It has been claimed that the early-2000s global warming ‘slowdown’ or ‘hiatus’, characterized by a reduced rate of global surface warming, has been overstated, lacks sound scientific basis, or is unsupported by observations. The evidence presented in a new commentary in Nature Climate Change by Fyfe et al. contradicts these claims.

The new Fyfe et al. paper is mainly in response to Karl et al. and Lewandowsky et al., who made the following statements in their abstracts:

“These results do not support the notion of a ‘slowdown’ in the increase of global surface temperature” – Karl et al., 2015, Science

“there is no evidence that identifies the recent period as unique or particularly unusual” – Lewandowsky et al., 2016, BAMS

Firstly, climate scientists agree that global warming has not ‘stopped’ – global surface temperatures and ocean heat content have continued to increase, sea levels are still rising, and the planet is retaining extra energy each year equivalent to roughly half a day’s worth of the sun’s incoming energy.

I think there is also broad agreement that climate scientists have probably not chosen the right words (e.g. ‘hiatus’) to describe the temporary slowdown, especially when talking to the media and the public.

However, there has very clearly been a change in the rate of global surface warming. Figure 1 shows rolling 15-, 30- and 50-year trends computed for different surface and satellite global temperature datasets. There are clear fluctuations in the rate of global temperature change in the past. We also expect similar fluctuations in future – global temperatures will not increase smoothly or linearly.
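For readers who want to reproduce this kind of figure, here is a minimal sketch of how overlapping trends can be computed – my own illustration, not the paper’s code – assuming only an annual GMST anomaly series in a NumPy array (the synthetic series at the bottom is a placeholder, not a real dataset):

```python
import numpy as np

def overlapping_trends(years, gmst, window=15):
    """Least-squares trend over every overlapping window of `window` years.

    Returns the centre year of each window and the trend in degC/decade.
    """
    centres, trends = [], []
    for i in range(len(years) - window + 1):
        x, y = years[i:i + window], gmst[i:i + window]
        slope = np.polyfit(x, y, 1)[0]   # degC per year
        centres.append(x.mean())
        trends.append(10.0 * slope)      # convert to degC per decade
    return np.array(centres), np.array(trends)

# Placeholder example: a steady trend plus year-to-year noise
rng = np.random.default_rng(0)
years = np.arange(1950, 2015)
gmst = 0.015 * (years - years[0]) + 0.1 * rng.standard_normal(years.size)
centres15, trends15 = overlapping_trends(years, gmst, window=15)
```

The 30- and 50-year curves follow by changing `window`; note that neighbouring 15-year trends share most of their data and are therefore strongly correlated, a point that matters for the statistical discussion in the comments below.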

Focusing just on the observations, the most recent 15-year trends are all positive, but lower than most similar trends of the past few decades – a clear demonstration that the rate of change has slowed since its peak.

Figure 1: a–c, Overlapping trends in global mean surface temperature (GMST) in three updated observational datasets. d, Ensemble-mean GMST trends from 124 simulations of 41 CMIP5 models, using RCP4.5 extensions from 2005. The shading in a–e shows ±1 standard deviation of the 15-year overlapping trends from the CMIP5 simulations. e, Overlapping trends in so-called ‘pacemaker’ experiments, in which a CMIP5 climate model was forced with observed eastern tropical Pacific sea surface temperature variability and RCP4.5 extensions from 2005. f, Overlapping trends in the temperature of the lower troposphere (TLT), spatially averaged over the near-global (82.5°N–70°S) coverage of two satellite-based datasets; model results are from 41 simulations of historical climate change performed with 28 CMIP5 models, with RCP8.5 extensions from 2005. Peaks in the running 15-year trends centred around 2000 reflect recovery from the 1991 Pinatubo eruption.

This simple comparison shows the cited statement from Karl et al. to be erroneous, but is this period ‘particularly unusual’, to use Lewandowsky et al.’s words?

The absolute value of the trend is not really relevant for such an assessment – it is far more instructive to examine how global temperatures have changed relative to our expectations, as represented by the CMIP5 simulations, for example.

Figure 1 also compares the observed trends with the CMIP5 simulations (grey shading). Note that CMIP5 also shows a recent drop in the expected rate of change for 15-year trends – the earlier peak is because of accelerated trends starting just after the 1991 eruption of Mt. Pinatubo, e.g. 1992-2006.

Observations should fall outside the simulated spread sporadically because of internal variability – we do not expect the observations to always match the ensemble mean. However, the recent observations have remained continuously outside the ±1σ spread of the simulations for a lengthy period, which is clearly unusual. It is also not just global temperatures that have been unusual – tropical Pacific sea surface temperatures & winds have also behaved well outside the simulated range.

These analyses all suggest that the early 2000s were indeed ‘particularly unusual’ – so we strongly dispute Lewandowsky et al.’s statement quoted above.

Reality has deviated from our expectations – it is perfectly normal (& indeed essential) to try and understand this difference. Oddly, Lewandowsky et al. seem to disagree, suggesting that trying to explain this event “departs from long-standing practice”, which I think is utterly bizarre and simply wrong.

Note that there are important issues with the radiative forcings used in CMIP5 (particularly solar & volcanic), which do not necessarily match the real world, especially after 2005. I hope that at least some CMIP5 models will be rerun with the updated CMIP6 forcings to determine the size of this effect. In addition, when an ‘apples-to-apples’ comparison is performed, the consistency between observations and simulations is much improved. This type of research has been valuable and is ongoing.

Finally, the issue of natural variability merits further discussion. Figure 2 shows the ratio of the change in temperature to the change in anthropogenic radiative forcing for three periods. The 1972-2001 period shows higher ratios (more warming per unit forcing) than the other periods. This period also corresponds to when the Pacific Decadal Oscillation was in its positive phase, suggesting that these variations in the Pacific have caused a large part of the difference between models and observations. As further evidence, model simulations which produce a fluctuation in their Pacific variability similar to that observed (either by chance or by design – see Fig. 1e) also better reproduce the observed fluctuations in global temperatures.

Figure 2: Anomalies in the ratio of trends in annual-mean, global-mean surface temperature to trends in anthropogenic radiative forcing. The ratio of trends over each period shown in this figure (that is, 1950–1972, 1972–2001 and 2001–2014) is expressed as an anomaly relative to the ratio computed over the full period from 1950 to 2014. We obtain 1972 as the end year of the big hiatus (the period of near-zero trend in the mid-twentieth century), and the choice of the 2001 start year of the warming slowdown avoids possible end-point effects associated with the large El Niño and La Niña events of 1998 and 2000 (respectively).
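As a rough illustration of the quantity plotted in Figure 2 – again a sketch, not the paper’s actual code – the ratio-of-trends anomaly could be computed along these lines, assuming annual arrays of GMST and total anthropogenic forcing on the same time axis (both hypothetical inputs here):

```python
import numpy as np

def trend(x, y):
    """Least-squares slope of y against x."""
    return np.polyfit(x, y, 1)[0]

def ratio_anomalies(years, gmst, forcing, periods, full=(1950, 2014)):
    """Ratio of the GMST trend to the anthropogenic-forcing trend for each
    (start, end) period, expressed as an anomaly relative to the same
    ratio computed over the full period."""
    def ratio(t0, t1):
        m = (years >= t0) & (years <= t1)
        return trend(years[m], gmst[m]) / trend(years[m], forcing[m])
    base = ratio(*full)
    return {(t0, t1): ratio(t0, t1) - base for (t0, t1) in periods}

# Hypothetical usage with the periods shown in Figure 2:
# anoms = ratio_anomalies(years, gmst, forcing,
#                         periods=[(1950, 1972), (1972, 2001), (2001, 2014)])
```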

Overall, there is compelling evidence that there has been a temporary slowdown in observed global surface warming, especially when examined relative to our expectations, which can be explained by a combination of factors. Research into the nature and causes of this event has triggered improved understanding of observational biases, radiative forcing and internal variability. This has led to more widespread recognition that modulation by internal variability is large enough to produce a significantly reduced rate of surface temperature increase for a decade or even more — particularly if internal variability is augmented by the externally driven cooling caused by a succession of volcanic eruptions.

The legacy of this new understanding will certainly outlive the recent warming slowdown.

Point of clarification: Although I am a co-author of the Fyfe et al. paper, I disagree with the sentence at the start of the ‘Claims and counterclaims’ section of the paper. My views are represented here.

About Ed Hawkins

Climate scientist in the National Centre for Atmospheric Science (NCAS) at the University of Reading. IPCC AR5 Contributing Author. Can be found on twitter too: @ed_hawkins

84 thoughts on “Making sense of the early-2000s warming slowdown”

  1. If the comparison of overlapping trends were repeated, but this time controlling for the effects of ENSO, volcanic aerosols and solar forcing (cf. Foster and Rahmstorf, 2011), and the apparent ‘slowdown’ disappeared (or were substantially less evident), wouldn’t that imply that perhaps the period is not “unique or particularly unusual”, and that the apparent slowdown is plausibly attributable to factors we already know about?

    1. Zhou and Tung of the University of Washington repeated the regression analysis of Foster and Rahmstorf (2011) but used the new HadCRUT4 series. Their results are similar but they then examined the residual low-frequency oscillation remaining after the trend of 0.17°C/decade is accounted for. The residual follows the AMO Index, and when the AMO index is included as a regressor the net anthropogenic warming trend drops to 0.07-0.08°C/decade. They also conclude that “the anthropogenic warming rate during the early 20th century can be detected and it is no different than during the second half. The increasing anthropogenic aerosols likely masked the true greenhouse warming rate during the second half.”
      http://journals.ametsoc.org/doi/abs/10.1175/JAS-D-12-0208.1
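      For concreteness, the style of regression used in these studies can be sketched as below – a simplified stand-in, not Foster & Rahmstorf’s or Zhou & Tung’s actual code – assuming annual-mean arrays for an ENSO index (e.g. MEI), volcanic aerosol optical depth and total solar irradiance (the published analyses also lag the natural regressors to maximise the fit, which is omitted here):

      ```python
      import numpy as np

      def adjusted_trend(years, gmst, enso, volcanic, solar):
          """Regress GMST on a linear trend plus ENSO, volcanic and solar
          regressors; return the trend with those factors accounted for,
          and the residuals (in which an AMO-like signal could be sought)."""
          X = np.column_stack([
              np.ones_like(years, dtype=float),  # intercept
              years - years.mean(),              # linear trend term
              enso, volcanic, solar,             # 'known' natural factors
          ])
          coef, *_ = np.linalg.lstsq(X, gmst, rcond=None)
          residuals = gmst - X @ coef
          return 10.0 * coef[1], residuals       # degC/decade, residual series
      ```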

      1. Hi Gavin, Robert,

        The approaches you have both outlined are certainly useful for explaining why the temperatures behaved as they did, but this does not address the unusualness issue.

        The unusual aspect is the length of time that the slowdown persisted – i.e. the sequence of ENSO/PDO/AMO. And, remember that the other aspects in the Pacific (sea levels, winds) are also way outside the expectations.

        Just because we can explain why global temperatures did not increase as fast as expected does not make the event usual/normal.

        cheers,
        Ed.

        1. If the apparent slowdown can be explained by known sources of internal variability, then that means that there is no unequivocal evidence for a change in the underlying rate of warming, and hence no “explanatory challenge that climate science must resolve”, as Lewandowsky et al. put it. Has there been some analysis of how unusual the 1998 super-El Niño actually was? It seems to be a substantial cause of the apparent slowdown, but just looking at e.g. the MEI it doesn’t look that unusual.

        2. Part of the problem is that different people take a different view of what the slowdown actually means/refers to, so it would be interesting to go through the different meanings and see how each is supported by the observations. My own view (largely from a statistical perspective) is that the apparent slowdown is largely the result of natural internal variability, which is interesting in its own right, but that there is little evidence for a change in the underlying rate of warming (i.e. the forced response of the climate system). However, just because it seems caused by internal variability doesn’t mean it isn’t interesting.

        3. For me the analysis of overlapping trends doesn’t make a strong case for a change in the underlying rate of warming, as there doesn’t seem to be a statistical test of whether that change is explainable by random variation (especially as the trend estimates are not independent and are trends of an underlying signal with autocorrelation).
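          To make the autocorrelation point concrete: a common (if crude) adjustment inflates the trend’s standard error using a lag-1 ‘effective sample size’, in the spirit of Santer et al. (2008). A minimal sketch, with hypothetical annual inputs:

          ```python
          import numpy as np

          def trend_with_ar1_error(years, y):
              """Least-squares trend and a standard error inflated for lag-1
              autocorrelation of the residuals, using the effective sample
              size n_eff = n * (1 - r1) / (1 + r1)."""
              slope, intercept = np.polyfit(years, y, 1)
              resid = y - (slope * years + intercept)
              r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation
              n_eff = len(y) * (1 - r1) / (1 + r1)
              s2 = np.sum(resid**2) / (n_eff - 2)            # residual variance
              se = np.sqrt(s2 / np.sum((years - years.mean()) ** 2))
              return slope, se
          ```

          Overlapping windows share most of their data as well, so the effective number of independent 15-year trends is far smaller than the number of windows.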

          1. Thanks Gavin for the thoughtful comments. I agree the language used in this area has been far from ideal and many people have used the same words to mean different things.

            For example, I would agree that it is likely (though not certain) that there has not been a slowdown in the underlying rate of global warming (as defined by ocean heat content for example), but there has very clearly been a slowdown in the actual observed rate at the surface. It is the latter, and especially the relatively large difference between the mean of the simulations and observations, which is unusual and therefore requires an explanation. Explaining the wiggles is just as important as the overall underlying trend.

            cheers,
            Ed.

          2. Regarding the actual observed rate at the surface, I think even then it is important to make the distinction between the forced response of the climate system and the unforced. I don’t see any strong evidence that the forced behaviour of the climate system is doing anything unusual, and I suspect that might be what Lewandowsky et al. are concerned with.

            For me the model–observation comparison is a separate issue: whether there has been a change in the rate of warming (apparent or underlying) depends only on what the climate system has actually done and is independent of our expectations.

            I’m not sure that explaining the wiggles is just as important as the underlying trend (at least from the “what shall we do about climate change” perspective). If the apparent slowdown demonstrated that there had been a substantial change in the climate system’s response to the forcing, that would be a very surprising finding, whereas a combination/coincidence of sources of natural variability would be interesting, but wouldn’t have such a large impact. However, “what should we do about climate change” is not the only reason to be interested in the climate!

          3. Hi Gavin,

            Probably the underlying rate hasn’t changed, but the reason for all this research has not really ever been the magnitude of the trend, it has always been about the divergence between the simulations and observations. Remember that the older datasets used at the start of all this (i.e. ~2007) showed slightly less warming (e.g. HadCRUT3). It was only much later (2012-2014) that e.g. Cowtan & Way, improved coverage in HadCRUT4 and other dataset updates brought the models and observations closer together.

            As a further comment, consider the hypothetical case that the observed surface warming trend had accelerated to 0.3K/decade, but the models said that it should be 0.5K/decade. All the same research would have been necessary to explain the difference, which would have involved forcings, variability and how to compare simulations and observations. The absolute magnitude of the trend has always been a red herring in my view, although that has not been expressed very coherently within the community.

            Also – those wiggles are very important for decisions about adaptation, though perhaps that is a different topic! And, interesting in their own right, as you suggest!

            cheers,
            Ed.

          4. “Probably the underlying rate hasn’t changed, but the reason for all this research has not really ever been the magnitude of the trend, it has always been about the divergence between the simulations and observations. ”

            I am not sure this agrees with my recollection of the discussion, at least on climate skeptic blogs, where it was initially about CO2 levels rising but temperatures staying flat, which was interpreted as meaning there was a problem with our understanding of the greenhouse effect (in some cases that there was no enhanced greenhouse effect at all). Commonly this was based on a plot juxtaposing some global temperature dataset and the Mauna Loa CO2 record. In that case the GCMs were not even mentioned.

    1. Thanks Judy – I do think there is some coalescing. Karl, Diffenbaugh and Lewandowsky are all on record suggesting they agree with what is in Fyfe et al.
      cheers,
      Ed.

  2. It looks as if internal variability has been underestimated all along. Of course that could apply to the pre-2000 period of greater warming just as much as to the period of lesser warming we seem to be in now.

    1. Internal variability has always been discussed, including whether it has contributed to the pre-2000 warming or not. These discussions do not always reach the media, and hence the public, however – please read the older IPCC reports: there is much discussion of variability there!
      Ed.

  3. Is not the reason for all the controversy that all the expectations of gross warming and the timing of that warming were supposed to be settled and certain? That there would be no dispute over so-called slowdowns or hiatuses if we were said to be in an evolving state of knowing?

    Beyond that, the dispute has only PUBLIC merit if there is a significant difference in outcome, both in effect and in timing, based on basic understanding. The trouble with recognizing – or, more, quantifying – a slowdown due to natural variability is that natural variability is not, or has not been, permitted to have radiative forcings similar to GHGs. Should a slowdown be acceptable from natural forces, then a “hurry-up” by natural forces is a reasonable consideration – which means that climate models that hindcast as they do during the 1975–2000 period may be too warm. Which brings us back to the problem of declaring climate change a settled and certain subject.

    When you declare a fire in the theatre, there had better be at least smoke from a smoldering fire found after the panic. Something that looked like smoke, or was just new, hot electronics, won’t cut it. Lewandowsky recognizes this and probably believes that a cry of Fire! is appropriate. He pushes back against questioning the settled and certain nature of publicly understood climate change.

    The above discussion is all related to whether we’re smelling a fire or something else, even if the something else is worth paying attention to.

    1. Hi Douglas,

      Some of climate science is ‘settled’ – emitting greenhouse gases will warm the atmosphere, with considerable risks associated with the resulting change in climate. There is very clearly a fire!

      However, there has always been an understanding that natural variability will temporarily enhance or mask the warming due to GHGs at different times, and all climate scientists will acknowledge that. This has also been made clear from the very start of the IPCC in 1990, for example, and there are papers which consider earlier periods (e.g. 1910-1940), which showed more rapid warming than might be expected, as resulting from such internal variability excursions.

      cheers,
      Ed.

  4. Hi Ed,
    Was 2015 data included in this paper (I can’t access the full paper at the moment)? If not, I wonder how the results would change if they were?

    What are the supposed advantages of the moving window approach used in your paper over ‘conventional’ change-point analysis, as used by Cahill et al 2015 (doi:10.1088/1748-9326/10/8/084002) ?

    I’m wary of methods that estimate trends by breaking data into segments (even if repeated over moving windows) – because doing so conflates uncertainty about intercepts with uncertainty about trends (each individual trend estimate also has its own intercept estimate, which will almost certainly be wrong when the prior data are ignored… especially after a large El Niño like 1988/9). I find the change-point approach of Cahill et al 2015 much simpler and more convincing. It ensures that trends are a continuous function of time, but allows for changes in the rates of change. Any thoughts?
    Thanks
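    For concreteness, a deliberately simple least-squares version of such a continuous changepoint fit – a sketch, not the Bayesian model of Cahill et al. – can be written with a ‘hinge’ basis, so the fitted line stays continuous and only the slope changes at the changepoint:

    ```python
    import numpy as np

    def broken_stick_fit(years, y, candidates):
        """Continuous piecewise-linear ('broken stick') fit with one
        changepoint, chosen by grid search over candidate years."""
        best = None
        for tc in candidates:
            X = np.column_stack([
                np.ones_like(years, dtype=float),
                years.astype(float),
                np.maximum(years - tc, 0.0),  # extra slope applied after tc
            ])
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            sse = np.sum((y - X @ coef) ** 2)
            if best is None or sse < best[0]:
                best = (sse, tc, coef)
        sse, tc, coef = best
        return tc, coef[1], coef[1] + coef[2]  # changepoint, slope before/after
    ```

    Formal inference then asks whether the reduction in residual error from allowing the slope change is larger than expected by chance, given autocorrelated noise.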

    1. Hi jimt,
      This paper was written before the end of 2015, so an extra point could now be added to the end, but this would not change any of the earlier trends, and so not the ‘slowdown’. 2015 is close to the ensemble mean from CMIP5, and 2016 is likely to be even higher. Any changepoint analysis is simpler, but not necessarily better – I would argue that there is a need to understand the physics behind any change, which is why the Fyfe et al paper focuses on understanding the role of the Pacific in defining the change dates in Figure 2 above. I prefer this over applying a statistical technique ‘blindly’, without any physics.
      cheers,
      Ed.

      1. But surely the comparison of overlapping trends is a similarly “blind” statistical analysis, whereas a changepoint analysis has the benefit of a formal hypothesis test that takes into account the uncertainty in the trend estimates (which are consistent with the trend since 1970, suggesting there is no statistically significant evidence for a change).

        I don’t think the trends falling outside the 1-sigma range of the models is all that unusual, as the overlapping trend estimates are highly correlated and the comparison doesn’t seem to include the uncertainty in estimating the trend. If they fell outside the 2-sigma range, that would be a stronger argument.

        At the end of the day, if we look at the observations since 1998-ish, then the statistical test for a trend gives a non-significant result, but the test for a change in the rate of warming gives a non-significant result as well. This means we shouldn’t claim that there has definitively been a change in the underlying rate of warming, but based solely on those observations we can’t claim there has been warming either. Basically this means that the observations don’t provide enough information to be confident either way, so I think we should be similarly equivocal in drawing conclusions about it.

        1. Hi Gavin,

          I think it is the case that 10 out of >250 simulations have trends which are smaller than that observed over one particular period, so that gives you an idea of how close to the edge of the distribution the observations have been (ignoring issues of radiative forcing and masking for sparse observations etc) – that is clearly unusual and requires investigation. I do not think that the definitive nature of any change in trend – for which any statistical test depends on an arbitrary choice of confidence level and choice of noise model etc – is necessarily that relevant in this situation. As a flippant example – the recent GWPF report claims there has been no statistically significant warming at all because the historical timeseries is consistent with a trendless ARIMA model! The confidence levels in any statistical test depend on how wide you are prepared to cast your net in terms of an acceptable model.

          It is clearly more likely than not that the trend reduced, according to the observations we have, and that this trend was much lower than expected as defined by the raw CMIP5 ensemble.

          cheers,
          Ed.
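          The ‘edge of the distribution’ comparison Ed describes is easy to make explicit; a sketch, with the observed trend and the array of simulated trends as hypothetical inputs:

          ```python
          import numpy as np

          def fraction_below(obs_trend, sim_trends):
              """Fraction of simulated trends smaller than the observed trend
              over the same period -- a crude percentile placing the
              observations within the model distribution."""
              return float(np.mean(np.asarray(sim_trends) < obs_trend))

          # e.g. 10 of >250 simulated trends below the observed value gives
          # ~0.04, i.e. the observations sit near the ~4th percentile.
          ```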

          1. “The confidence levels in any statistical test depend on how wide you are prepared to cast your net in terms of an acceptable model.”

            While it is true that the confidence levels (or alpha in an NHST) depend on what Bayesians would regard as prior information, it is not very often that a one-sigma confidence level is considered appropriate for suggesting that something really unusual has happened. This seems rather too low a hurdle.

            “It is clearly more likely than not that the trend reduced, ”

            Clearly the trend has reduced, but it is not clear that this is not the result of random variation. For me this is why it makes sense to discuss the model–observation difference (and its likely cause) separately from the question of whether there has been a slowdown or not. We don’t need a change in the rate of warming to have a model–observation difference. If the models are running too warm then that would eventually be picked up even without a change in the rate of observed warming. Combining the two questions seems to unnecessarily complicate things as far as I can see and makes it easier for people to talk past each other.

          2. Just to be clear, as somebody who works in statistics, I find physics much more compelling, and the more physical validity a statistical model has, the happier I am likely to be with it. However, when using statistics, it is best to use the statistical model that most directly answers the question as posed. If the question is “has there been a slowdown in the rate of warming”, then something like a changepoint analysis is likely to provide the most direct answer to the question. In this case, the answer seems to be that there may be a slowdown, but there isn’t sufficient evidence to be confident according to the traditional 95% threshold. That doesn’t mean there isn’t something interesting going on or that we shouldn’t investigate it, just that we shouldn’t claim unequivocally that there has been a slowdown.

      2. Hi Ed
        It’s not just about statistical significance in the traditional sense. An information-theoretic or Bayesian approach finds very little evidence for a change in trend since ~1970 in any global temperature dataset (see e.g. Cahill et al 2015). You actually have to impose a very strong and narrow prior in favour of a trend change (limiting the range of possible trend changes to post-2000, or reducing the prior magnitude of any change to be very small) to find any support at all for a change in linear trend using Bayesian change-point models (you can try yourself here: https://tanytarsus.shinyapps.io/changepoint/ – NB it often takes 10–20 secs to load). Fair enough if you have that strong prior based on something other than the temperature data, but from a statistical perspective the evidence for a recent slowdown is very thin (see also https://tamino.wordpress.com/2016/02/25/no-slowdown/) – even for those of us with little time for null hypothesis tests.
        I’m all for trying to understand short term variation, and any divergence from climate model expectations, but it seems this paper has used some fairly dubious statistical methods to argue there is anything unusual about the recent trends.

  5. Any insight from anyone as to why the variabilities are so different in the two temperature populations in the Fyfe, Gillett, and Zwiers paper on this subject – one population from HadCRUT4, the other from the ensemble of climate models? FGZ take the difference in means, standardized by the pooled standard deviation, to indicate there is a significant difference. Were the variability in the HadCRUT4 ensembles (hypothetically) larger, it wouldn’t be as significant.

    When I saw that, I was and remain suspicious of the comparison. I can’t see why the first two moments oughtn’t be expected to agree, or at least the second. Maybe there’s some good explanation …

    Sorry, haven’t read the recent “2000s” paper … don’t get Nature.

    1. Hi Jan,
      My understanding is that the two distributions are representing very different things – the HadCRUT4 ensemble only samples uncertainty in the existing observations of the one real-world realisation. The models sample the range of all possible realisations, so must necessarily be much broader.
      cheers,
      Ed.

      1. Thanks, Ed.

        I considered that as one possibility. If so, though, it seems to me that to do the comparison – that is, to do the t-test – the one-world realisation needs to be adjusted for the fact that it is just a one-world realisation.

        If this is not done, then the standard deviation used to measure the separation, even if the pooled is used, would be improperly small.

        On the other hand, if the one-world realisation were adjusted upwards, then the separation between the HadCRUT4 observations and the climate models result would not be as dramatic, and the hiatus not as pronounced.

        1. Hi Jan,
          Normally the observations in this type of comparison are shown as a delta function, as there is normally no uncertainty used, so the Fyfe approach is a step forward. And, I think it depends on the question being asked. If the question is: how far has the real world *as we have experienced it* deviated from our expectations, then the Fyfe approach is the right one, in my view.
          cheers,
          Ed.

          1. Thanks, Ed. My only additional comment is that credibility could be assigned either way. That is, the context in FGZ and elsewhere is that model ensembles aren’t tracking observations. If, indeed, model ensembles represent “our expectations”, then absence of likelihood can be assigned to observations. There remains a hiatus in that argument, but there’s no implication that models are broken.

  6. Given that satellite data is available for most of the period considered in the paper, why is it ignored in the analysis?

  7. Hi Ed, congratulations on the paper. I certainly agree with its conclusion that there has been a slowdown in the rate of global surface warming.

    However, I take issue with some of the other claims in the paper, that are presented as factual. For instance, this one about the causes of the so-called big hiatus from the 1950s to the 1970s:
    “During this period, increased cooling from anthropogenic sulfate aerosols roughly offset the warming from increasing GHGs (which were markedly lower than today).”

    This is highly misleading; it is untrue by a long way according to the IPCC best estimates of total anthropogenic forcing from aerosols and greenhouse gases. (I assume the wording wasn’t intended to exclude anthropogenic non-sulphate aerosol forcing, as that would be disingenuous.)

    AR5 has aerosol forcing changing by -0.34 Wm-2 between 1950 and 1975, whilst forcing from long-lived GHGs increased by 0.60 Wm-2, with the short-lived GHG ozone adding a further 0.12 Wm-2, making total GHG forcing of +0.72 Wm-2. So total aerosol forcing offset 47% of the total GHG forcing increase. Over 1950–1979 it offset 56%, over 1950–1972 56%, over 1955–1975 42% and over 1955–1972 51%. Moreover, there is no good evidence that the response to aerosol forcing on a multidecadal timescale is greater than that to this mixture of GHGs.

    Therefore, total anthropogenic aerosol forcing only offset about half the warming from greenhouse gases over the 1950s to the 1970s, on the IPCC’s best estimates in AR5. Moreover, since AR5, estimates of aerosol forcing strength have tended if anything to be reduced.
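    The offset percentage quoted above follows directly from the cited AR5 numbers; a quick check, using the values as quoted for 1950–1975:

    ```python
    aerosol = -0.34            # W m-2: change in aerosol ERF (AR5, as cited)
    llghg = 0.60               # W m-2: long-lived greenhouse gases
    ozone = 0.12               # W m-2: short-lived GHG (ozone)
    total_ghg = llghg + ozone  # +0.72 W m-2
    print(f"aerosol offset: {abs(aerosol) / total_ghg:.0%}")  # -> 47%
    ```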

    Another claim in the paper that I also think is highly misleading is this:
    “Research has also identified a systematic mismatch during the slowdown between observed volcanic forcing and that used in climate models.”

    Whilst this is literally true (I gather that most models have zero volcanic forcing post ~2006, thus giving their simulated warming a boost), it ignores mismatches in the other direction for some other forcings. So far as I am aware, there is no good evidence that overall differences between model-assumption and actual forcing changes contributed to the hiatus/ warming slowdown.

    The thorough Outen et al. 2015 paper (http://dx.doi.org/10.1002/2015JD023859) found essentially zero difference between the change in total forcing in the NorESM model over 1998-2012 when substituting best recent observational estimates of changes for those per CMIP5 historical forcings extended with the RCP8.5 scenario. Although this would not necessarily be true for forcings used in other models, I have not seen other research that considers changes in all forcings, rather than just cherry-picking volcanic or one or two other forcings.

    1. Hi Nic,
      As you know there is a lot of discussion about the efficacies of the different forcings – but physically it makes sense that aerosols have a higher efficacy due to their spatial distribution. But I agree that the size of the effect is still under debate. There is also a strong seasonal difference in cooling from aerosols.
      Also, Figure 2 above shows the probable influence of the PDO in offsetting some of the GHG warming in the big hiatus. There may have been an AMO influence too.
      cheers,
      Ed.

      1. Hi Ed,
        The AR5 aerosol forcing values I quoted are for ERF. There is no good evidence that, when measured by ERF, aerosol forcing has a higher efficacy than CO2.
        Hansen (2005) estimated an efficacy of 0.99 for GISS ModelE when using his Fs measure, which is almost the same as ERF. Marvel et al (2016) estimated a (transient) ERF efficacy for aerosols of 1.03 in GISS ModelE2, after correcting the F2xCO2 error that I raised. The Ocko et al 2014 analysis shows an efficacy below one for aerosol forcing (Table 2).
        Shindell’s 2014 study claimed that aerosols did have a higher efficacy, but inter alia did not use accurate forcing information. Piers Forster’s 2016 review (doi: 10.1146/annurev-earth-060614-105156) reported that he had found no evidence for aerosols having higher efficacy than CO2 in CCSM4 or HadGEM2. He described Shindell’s conclusions as speculative.

  8. Hi Ed,
    Good post. It is quite obvious that there has been a slowdown in the real world compared to the model average, probably due to natural variations that have sent relatively more heat deep down into the oceans for a while. I believe that Foster and Rahmstorf 2011 and the “pacemaker” experiment above demonstrate that ENSO variations are the main cause.
    You mentioned above that the comparisons were not quite apples to apples, i.e. no SST in the models as in the observations. I can’t change the models, but maybe GISTEMP dTs is more model-like, being the only observational index that tries to estimate global 2 m air temperatures (stretched very thin over the oceans, of course).
    Anyway, I pasted the 15-year trends of Gistemp dTs (green) into Figure 1c above:
    https://drive.google.com/file/d/0B_dL1shkWewaU2NUbXFENTc4dVU/view?usp=docslist_api
    A better fit with the models perhaps – fewer excursions outside the grey bounds. James Hansen’s old dTs index, once upon a time made for model comparisons, is not that bad.

  9. Hi again Ed,
    I forgot to mention that all the observational indices used in Figure 1 have an Arctic “cooling bias”: HadCRUT4 due to poor coverage, GISTEMP due to the unfair GHCN3 down-adjustment of Arctic stations, and Karl et al. is obviously not the version with Arctic infill.
    Indices that avoid those Arctic biases, BEST and Cowtan & Way, have 15-year trends that peak at 0.32 °C/decade after Pinatubo, and I believe that this is within the one-sigma bound of the CMIP5 ensemble in Figure 1.
    Taking all biases into account, the recent slowdown becomes less unusual, and there is plenty of room in the steadily rising ocean heat content, for any “missing” atmospheric heat.

  10. Hi Olof,
    Agreed that the observational biases matter – see the ‘apples-to-apples’ link in the post. But much of the criticism of scientists studying the slowdown forgets that efforts such as Cowtan & Way were motivated by the slowdown itself. The earlier datasets (e.g. HadCRUT3) had a more marked slowdown – additional data and improved methodologies have helped this.
    cheers,
    Ed.

  11. It is a corruption of language, and of reasoning, to use the word ‘slowdown’ to describe a discrepancy between models and reality.

    In reality, it is the models that have shown increased warming and it is they that are discrepant.

  12. These plots of actual and model “anomaly” versus time are all an optical illusion. The backfilled dips for volcanic activity make it appear that the models are more accurate than they are. The correct thing to do would be to plot the residuals of the model against observations over time.

  13. Ed, Fyfe et al. says the following: “Using this more physically interpretable 1972-2001 baseline, we find that the surface warming from 2001 to 2014 is significantly smaller than the baseline warming rate.”

    How was the significance of the trend description established? I didn’t see an explanation in the paper.

    1. Hi Martin,
      We do not test the statistical significance of the trends themselves because it is not relevant for the questions we are asking. In addition, you would then have to make an arbitrary choice about significance levels and the noise model.
      Thanks,
      Ed.

      1. Thanks, Ed. I meant to ask about the trend difference, but apparently my brain decided that words are interchangeable.

        What I’m gathering from the recent discussions here and elsewhere is that (disregarding the sceptics) the difference between those who argue for and against a slowdown is that the ‘against’ people are talking about understanding the distribution from which the observed values are drawn, while the ‘for’ people are talking about understanding the specific realisation that occurred. Does that sound like a fair description?

        1. Mainly, any disagreement seems to be about whether the slowdown is significant (statistically or otherwise). The apparent divergence between models and observations says ‘yes’ to me – and the research undertaken since this first started being discussed explains why: a combination of forcings, unusual variability and how the comparison is done.
          Cheers,
          Ed.

  14. Hello, Ed. I’m curious about your Point of clarification: ‘Although I am a co-author of the Fyfe et al. paper, I disagree with the sentence at the start of the ‘Claims and counterclaims’ section of the paper. My views are represented here. ‘

    The first sentence in the ‘Claims and Counterclaims’ section is:

    Recent claims by Lewandowsky et al. that scientists ‘turned a routine fluctuation into a problem for science’ and that ‘there is no evidence that identifies the recent period as unique or particularly unusual’ were made in the context of an examination of whether warming has ceased, stopped or paused.

    What do you disagree with there? Or did you mean the next sentence:

    “We do not believe that warming has ceased, but we consider the slowdown to be a recent and visible example of a basic science question that has been studied for at least twenty years: what are the signatures of (and the interactions between) internal decadal variability and the responses to external forcings, such as increasing GHGs or aerosols from volcanic eruptions?”

    1. The first of the two sentences was far more critical of Lewandowsky et al. in an earlier version of the paper. There is plenty of evidence that the period was unusual in whatever context.
      Ed.

  15. Off-topic comment: there is a recent paper in Science by Clement et al. showing that models can reproduce AMO-like patterns without a role for the ocean. Would you do a post on that topic?

      1. Ed, I think the most important words in this sentence from the abstract of Clement et al. – “Here we show that the main features of the observed AMO are reproduced in models where the ocean heat transport is prescribed and thus cannot be the driver” – are “in models”?

  17. Ed, perhaps you read the blog of “tamino”; there is a recent post by Stefan Rahmstorf, Grant Foster (tamino himself) and Niamh Cahill. They conclude: “that the early-2000s global warming slowdown is greatly overstated by Fyfe et al. Their central claim that “the surface warming from 2001 to 2014 is significantly smaller than the baseline warming rate” (1972-2001) is falsified by the statistical analysis discussed here.”
    In my eyes it’s a harsh critique of your paper. So do you have a response to S.R. & Co? Or does one of your co-authors?

    1. Thanks Frank – Rahmstorf et al. submitted a similar Comment on the Fyfe et al. study, which was reviewed, along with a Reply. Nature Climate Change decided not to publish the exchange. Obviously there is some remaining disagreement between the author teams! I hope the Fyfe reply also appears somewhere.
      Cheers,
      Ed.

  18. Congratulations on a very thorough, reasonable, balanced and also brave presentation, both here and in Fyfe et al. As someone who is not a climate scientist, my comment might seem naive, but I would appreciate a response.

    I think the problem many skeptics like myself have with the climate change orthodoxy is that there would seem to be a circular argument at its heart. In order to claim that increased levels of global CO2 emission produce significantly elevated levels of global warming, it’s necessary, one would think, to establish, at the very least, a clear correlation between them. Yet, as you yourself point out, when we observe the actual raw data, no such correlation is evident. In fact I’ve noticed three distinct periods where correlation seems seriously lacking: first, during the period 1910-1940, we see a run-up in temperature that’s far greater than the much more subdued increase in atmospheric CO2 during that time; second, we see a cooling period from ca. 1940-1979, while CO2 emissions are beginning to soar; finally, as argued in Fyfe et al., we see a distinct slowdown in global temperatures during the 21st century while CO2 levels soar to an all-time high.

    In all three cases, the discrepancies have been balanced and thus accounted for by climate scientists claiming that “natural forcings” of one sort or another are responsible. But such arguments are based on the notion that the relation between CO2 and significant temperature rise has already been established and all that’s needed is a plausible explanation for the discrepancy.

    I’m sorry, but I can’t see how such a relation could be established without a clear correlation to begin with. Do you see my problem? The attempts to account for the lack of correlation imply a knowledge of the relationship that could only be established by a correlation that has not presented itself in the data.

    I’m aware, of course, that certain physical relations have been established. We know CO2 is a greenhouse gas and as such will produce some warming. But that in itself is not sufficient to establish CO2 as a significant contributor to the warming that we’ve seen. The only way to do that is to establish a clear correlation and without a clear correlation one cannot claim CO2 as a major forcing that’s somehow been masked by other factors.

    1. Hi Victor,
      There is a strong correlation between greenhouse gas emissions and global temperature over the past 160+ years. But there are many other factors which are important for the precise evolution of temperatures – sulphate aerosols, volcanic eruptions, solar activity, natural variability etc. We would never expect any particular decade to always show a clear relationship with CO2 because of these other confounding factors – it is the long term view which shows a clear relationship, as well as being consistent with the basic physics.
      Hope this helps,
      Ed.

      1. Thanks for the thoughtful response, Ed. It’s refreshing to communicate with a climate scientist gracious enough to respond to skepticism without resorting to personal attacks. I’m also impressed by your willingness to report your findings honestly, regardless of whether they call certain mainstream dogmas into question.

        Nevertheless, I’m afraid your explanation doesn’t help. You seem to be simply restating the position I’ve been having trouble with. If you want to claim a correlation between CO2 emissions and global warming, you actually have to produce such a correlation from the data, no? Instead, you simply assert it, as though no hard evidence is necessary. The fact that one can come up with reasons why the raw data itself does not show a clear correlation cannot in itself produce the desired correlation. In the parlance of basic scientific principle, such explanations are referred to as “saving hypotheses.” In other words, it’s always possible to come up with some sort of explanation that argues against the falsification of a theory that’s, for all practical purposes, been falsified. In principle, there are an infinity of such explanations, which is why Occam’s razor is so important. If you can always come up with some scenario that explains away any objection, then your theory is not falsifiable, i.e., not testable, which means it is not a truly scientific theory.

        You refer to a “long term” correlation going back over 160 years, which would take us back to 1856, long before the burning of fossil fuels was anywhere near as intense as it is today. Consulting a typical graph covering roughly the period from then to now (http://www.global-warming-and-the-climate.com/images/150-yr-global-temperatures.gif), I see no sign of any upward trend in temperature from the 1850’s to 1910. Then suddenly from 1910 through ca. 1940, there is an abrupt rise in temperature, at a time when CO2 emissions were still a relatively minor factor — followed by an abrupt decline, which in turn is followed by a period of relative stasis through the mid to late 70’s. It’s only in the last 20 years or so of the 20th century that we see anything close to a clear correlation between CO2 and temperature.

        So where do you see the “long-term trend”?

        Now one can always produce a “statistical” trend during this period by choosing 1856 and 2015 as your endpoints and ignoring everything that happened in between. That’s what most scientists would call an “artifact” produced solely by your methodology.

        1. In the scientific world correlation does not imply causation.

          I too am interested in the statistical evidence of the long-term trend. The biggest problem is that actual measured data is not available to determine long term trends; therefore, scientists create models based on hypothetical data.

          Ed, however, has taken actual data (which is excellent) and created this wonderful graphic. However, I am not yet convinced that the causation for that rise in temperature is exclusively the result of human activity. To be convinced of such, I would like to know where in the scientific literature the baseline for “natural” emissions has been determined. Natural emissions are going on every day just like man-made emissions. How are these quantitatively differentiated? Without a distinction between what is natural and what is “not natural”, any conclusions about data sets are irresponsible.

          I appreciate the genuine dialogue here with the climate scientist Ed. I hope that my tone is not misinterpreted as brash, I just have some honest questions that if answered, can lead me to a better understanding of this.

  19. “In the scientific world correlation does not imply causation.”

    True. But the opposite IS the case. Causation implies correlation. With no correlation it’s illogical to posit causation.

      1. Thanks for the link, Ed. I’ll be taking a close look at this article, which looks very interesting.

  20. I’ve read the article you linked to and studied the graph(s). And again I want to thank you for your patience and your professional attitude. Unlike most of the “warmists” I’ve encountered, you actually behave like a real scientist, rather than an ideologue, and that’s much appreciated.

    However, I must say I am still far from convinced. For one thing, the author makes it sound like the skeptics (aka “denialists”) are the ones with a theory to prove, while in fact the opposite is the case. When you propose a new theory, especially a theory with such profound implications for the future of the entire human race, the burden of proof is on you to establish your theory, rather than those questioning its validity. The author claims that “In order to end the scientific part of the debate—to reach “climate closure”—it is therefore necessary to demonstrate that the giant fluctuation theory has such a low probability that we can confidently dismiss it.”

    Well I’m a skeptic and I have no such theory, thus nothing to prove. My skepticism is not based on your theory conflicting with mine, but what I perceive as the lack of convincing evidence in favor of AGW as a significant factor in predicting the future of our climate and our weather. I have nothing to prove. You (and your associates) do.

    The graph to which you refer strikes me as extremely misleading, even perverse. Of course I realize, as you say, that the x-axis represents CO2 forcing and not time, but the effect is nevertheless to minimize the periods where there was no correlation and maximize the periods where there was. For you it might be a reasonable way of representing the data, but to me it’s a way of distorting the data to give the impression of a clear linear correlation where in fact there is none. Sorry, but it looks like a classic case of bending the evidence to support the theory. The ONLY period where there was a clear correlation, i.e. 1976-1998, a period of only 20 years, dominates, whereas the significantly longer period from 1944-1976 (32 years) is squeezed into a relatively small space. If the diagonal line is meant to represent a long-term trend, then it’s obviously an artifact, as an accurate representation of temperature over time (as displayed on numerous graphs from many different sources) shows no long-term trend.

    A very straightforward and simple scattergram representing the relation between CO2 and temperature from around 1958 through ca. 2013 was produced by Danley Wolfe: https://wattsupwiththat.files.wordpress.com/2014/09/clip_image006_thumb2.jpg?w=1212&h=897 As you can see, the only period of correlation was between the mid-70’s and the late 90’s, again a period of roughly 20 years. Prior to that period and after it, the scatter is clearly random.

    The author claims that his “basic argument can be understood by the lay public.” Well I’m part of that public and I’m sorry but a statement such as “We can confirm that this is reasonable since the average amplitude of the residues (±0.109°C) winds up being virtually the same as the errors in 1-year global climate model hindcasts . . .” tells me nothing whatsoever.

    What I do hear over and over from certain quarters is that AGW has to be warming the planet, because what else could be doing it? And that seems to me the most reasonable take on the matter. But it’s far from being a proof, or even strong evidence. What is it that so rapidly warmed the planet from 1910 – 1944? Certainly not CO2, which was increasing at only a very small amount back then.

    Sorry to be such a hard case, Ed. Thanks for listening.
