The importance of reliable uncertainty estimates

Reliable estimates of uncertainty are arguably more important than the actual value being quoted. I recently came across a classic example in astronomy.

From the late 1920s onwards, estimates have been published for the ‘Hubble Constant’ – a measure of how fast the Universe is expanding. It is named after Edwin Hubble, who first made the required measurements. Simply put, Hubble’s Constant relates the distance to a distant galaxy to how fast it is moving away from us:
v = H0 d
where the velocity (v) is measured relatively easily using the ‘redshift’. The distance (d) is much harder to measure, and this produces the uncertainty in the estimate of Hubble’s Constant (H0).
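As a rough illustration of the relation (the value of H0 and the example galaxy here are assumptions for the sketch, not measurements):

```python
# A rough illustration of Hubble's law, v = H0 * d.
# H0 = 70 km/s/Mpc and the example numbers are assumed, not measured.
C_KM_S = 299_792.458  # speed of light in km/s

def recession_velocity(distance_mpc, h0=70.0):
    """Recession velocity (km/s) of a galaxy distance_mpc megaparsecs away."""
    return h0 * distance_mpc

def distance_from_redshift(z, h0=70.0):
    """Distance (Mpc) via the low-redshift approximation v ~ c * z."""
    return C_KM_S * z / h0

print(recession_velocity(100.0))               # 7000.0 km/s for a galaxy 100 Mpc away
print(round(distance_from_redshift(0.01), 1))  # 42.8 Mpc for redshift z = 0.01
```

For small redshifts v ≈ cz, so the velocity comes almost for free from the spectrum; the hard-won quantity is the independent distance estimate.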

When Robert Kirshner examined how the estimated value of H0 had changed with the date of the scientific publication, he found that the early estimates tended to be far too large but, more importantly, that the uncertainties were vastly underestimated (see Figure). A modern estimate of H0 is around 70 km/s/Mpc.

In climate science we try to measure the climate sensitivity – a similar constant, relating the equilibrium global temperature increase to a change in radiative forcing – and its associated uncertainty. We should continue to remember that estimating the uncertainty can be more important than the central value we obtain.

It may also be interesting to examine how the estimates of (and uncertainty in) climate sensitivity have changed over time, since Arrhenius first estimated it in 1896.


Estimates of the Hubble Constant by publication date. From Kirshner (2003, PNAS).

About Ed Hawkins

Ed Hawkins (twitter: @ed_hawkins) is a climate scientist in NCAS-Climate at the Department of Meteorology, University of Reading. His research interests are in decadal variability and predictability of climate, especially in the Atlantic region, and in quantifying the different sources of uncertainty in climate predictions and impacts. Ed is a Contributing Author to IPCC AR5 and a member of the CLIVAR Scientific Steering Group.

14 Responses to The importance of reliable uncertainty estimates

  1. I like the way the graph seems to approach the ‘correct’ value monotonically, give or take a little error-related wobbling. I’ve read elsewhere that there is a tendency for the first estimate of a value to be (a) wrong and (b) sticky such that subsequent estimates are torn between agreeing with what has been published and, you know, with the actual evidence.

    One thing it suggests to me is that estimating uncertainty is hard based on a single analysis and that it is far easier to get a general idea of uncertainties by making repeated estimates of the same value in as many different ways as possible.

    I wonder if anyone has ever gone back to see if they can get the uncertainty estimates ‘right’ retrospectively? Work out what was missing and why they overlooked it.

    • Ed Hawkins says:

      Thanks Nebuchadnezzar – am sure there are some social ‘sticky’ aspects as you suggest, which we must try to avoid.

      I like your idea of going back to re-examine the uncertainties – in this case the distances to the galaxies they derived in the early estimates were biased far too low. My guess is that it was probably a case of ‘unknown unknowns’ – they couldn’t estimate the uncertainty reliably because their understanding of the Universe was not quite right. Can any amount of clever statistics account for that?

  2. There’s a similar story about the speed of light told very nicely in “Alice and the Space Telescope”. It’s generally a good idea to mentally double (as a minimum) the size of your error bars before reaching any conclusions.

  3. Nick Barnes says:

    There’s an excellent poster doing the same for a large set of physical constants (c, G, h0, e, etc). Can’t find the PDF now, but here’s a paper with some related material:
    http://www.hss.cmu.edu/departments/sds/media/pdfs/fischhoff/Henrion-Fischhoff-AmJPhysics-86.pdf

  4. Pete Newman says:

    That curve from astronomy hides another interesting point applicable to climate science (and I’ve now worked on both sides of the divide).

In the 1970s, there was a bit of a fight going on between Gerard de Vaucouleurs and Allan Sandage over H0. De Vaucouleurs favoured an H0 of about 90 km/s/Mpc, Sandage favoured one as low as 45, and their error bars didn’t even overlap (perhaps the two points around 1973?). This was despite them starting from essentially the same data set! Michael Rowan-Robinson covered this well in his 1985 book, The Cosmological Distance Ladder, which, incidentally, was pretty much responsible for me getting into astronomy in the first place.

    The lesson for climate science, and any other science for that matter, is that uncertainty is not just about measurement errors but also the uncertainties introduced by choice of analysis. In the end, the differences between de Vaucouleurs and Sandage were largely due to biased selection of data, combination of errors in different sources and systematic versus random errors.

    I think it’s vital in climate science, as in any other science, that we use all the data we have in all possible analyses. If and when we get the same answers from different methods and different cuts of the data, we can be a little less uncertain about the conclusions we draw.

    Good article, Ed, and thanks to Richard Betts for tweeting a link to it.

    • Ed Hawkins says:

      Thanks for the interesting discussion Pete – I agree with your climate assessment.

      PS. I have also worked on both sides of the same divide. For me it was Marcus Chown’s Afterglow of Creation.

    • “I think it’s vital in climate science, as in any other science, that we use all the data we have in all possible analyses. If and when we get the same answers from different methods and different cuts of the data, we can be a little less uncertain about the conclusions we draw.”

It also suggests that when trying to re-estimate a value, one ought to attempt to start from as different a base set of assumptions as possible. Or, conversely, when looking for the takedown, to focus on the weakest assumption shared by the greatest number of authors on a subject.

      • Pete Newman says:

        Neb– “one ought to attempt to start from as different a base set of assumptions as possible” is what I meant by “different cuts of the data”. So I think we agree! –Pete

  5. Richard Berler says:

I was happy to see Dr. Trenberth suggest that as more of the climate system is handled explicitly in the models, the range of scenarios that the models may be able to replicate may become larger/more complete, leading to an apparent INCREASE in uncertainty in the projection of the future course of the climate system. I take this to mean that current analysis of uncertainties may be too reliant upon the assumption that present ensembles can produce the full range of solutions…

  6. Richard Berler says:

One other thought… I’m particularly interested in how realistic the synoptic climatology produced by the models is. If the synoptic climatology is not reproduced accurately, with the full range that occurs in the real world, I would have difficulty taking the model outputs with confidence.

    • Ed Hawkins says:

      Hi Richard,
Yes, there is a good chance that the uncertainty in climate projections will increase as more components of the climate system are handled explicitly – particularly when adding more details of the carbon cycle. However, it is widely acknowledged that the current range of climate models may underestimate the true uncertainty, especially on regional scales, where the representation of synoptic variability is good in some regions and not so good in others.
      Ed.

  7. James Annan says:

    Hi Ed,

    Can you clarify what you mean by “true uncertainty”? This phrase (which I acknowledge is fairly common) never seems to make much sense to me. When uncertainty is primarily (or indeed wholly) epistemic, that is, it is a matter of the ignorance of researchers, there is no such thing as “true uncertainty”.

    This is somewhat relevant:
    http://www.agu.org/pubs/crossref/2011/2011GL049812.shtml

    • Ed Hawkins says:

      Hi James,
      Thanks – fair point – I will try and avoid this phrase in future. It would be nice to think that we could provide uncertainties, for e.g. regional precipitation, that reliably include the “truth”, even given our ignorance. Whether this is possible is another matter.
      Will have a look at your paper.
      cheers,
      Ed.

  8. Alexander Harvey says:

As I recall, the distance estimate ultimately rests on the period–absolute luminosity relationship for Cepheid variable stars. Measure the period and the apparent luminosity and one can estimate the distance. This only gets you as far as one can resolve individual stars, which I think is only as far as the local group. Knowing the distance to a nearby galaxy allows one to use other observable features to extend the scale to more distant galaxies. However, if one gets the Cepheid variable relationship wrong, all the longer scales based on it are similarly wrong.

In the 1950s, it was realised that there were two distinct populations of Cepheid variables, and their luminosity–period relationships differed by a factor of 2 or a little more. The distance scale expanded by that factor and Hubble’s constant shrank accordingly.
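That factor-of-two shift falls straight out of the distance modulus, m − M = 5 log10(d / 10 pc): an error of about 1.5 magnitudes in the assumed absolute magnitude roughly doubles every derived distance (since 5 log10(2) ≈ 1.5). A minimal sketch, with invented magnitudes purely for illustration:

```python
# The magnitudes here are invented for illustration only.
def cepheid_distance_pc(m_apparent, m_absolute):
    """Distance in parsecs from the distance modulus m - M = 5*log10(d / 10 pc)."""
    return 10 ** ((m_apparent - m_absolute + 5) / 5)

# Distance assuming one calibration of the absolute magnitude (say M = -4.0)...
d_old = cepheid_distance_pc(20.0, -4.0)
# ...and assuming the star is really ~1.5 magnitudes brighter (M = -5.5),
# roughly the size of the two-population recalibration:
d_new = cepheid_distance_pc(20.0, -5.5)

print(round(d_new / d_old, 2))  # 2.0 -- distances double, so H0 halves
```

Because the distance enters the denominator of H0 = v/d, doubling all the distances halves the constant.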

The early uncertainties in Hubble’s constant were not that unrealistic, as they were based on the observable uncertainties in the Cepheid variable relationship, amongst other things. They didn’t know that not all Cepheid variables were born equal.

A question raised is how much additional uncertainty one should include for such hiccoughs. Was the existence of a second group of Cepheid variables in any way predictable or, conversely, was there any evidence that such an occurrence could be ruled out? That may be tractable in hindsight but is somewhat more difficult in foresight.

    I may be almost unique in my view that the Climate Sensitivity is the least interesting of all the unknowable values in the lexicon. It is what it is. More important is the general relationship between what we have done so far and the response so far and a strong hunch that they are related in a fairly linear fashion.

As I understand it, there is one practical experiment we could, perhaps will, and probably should undertake: reduce sulphate pollution as a matter of some urgency and in conjunction with other measures. The sulphate cooling has a very short half-life; if temperatures then take a nasty lurch upward, that would be compatible with the high-end estimates, and if they don’t, that would be compatible with the low-end estimates. I for one am interested to know whether we have passed the threshold whereby the maintenance of some form of geo-engineering is required to meet any of the more ambitious targets (1.5–2ºC). I deliberately used the term maintenance, not commencement.

The most certain way of narrowing the estimate is to take actions that will put some curvature into the forcing history. The quickest way of doing that is to attack the concentrations with the shortest half-lives (amongst other actions). It is interesting to note that we did this for CFCs, which were once threatening significant upward curvature in the forcing history and are now doing the opposite as they decay. That effect may have been minor, but the sulphate effect might be significant and quite rapid should China et al. put their minds to cleaning up smokestack emissions.

    Alex
