I’m not sure, but I think it is a fairly minor change

Hi Troy, I’ve now done a more accurate assessment using a computed noninformative prior, and find a rather larger increase in ECS at the 50% and 83% points when adding the paleo information, and a modest increase rather than a significant decrease at the 95% point. But the effects of adding the paleo information are still fairly modest.

Thanks Nic, that is a pretty comprehensive dissection of some of those older studies. I also re-read your “Sensitive Matter” paper and saw your criticisms WRT the AR5 studies.

Regarding the effect of a palaeo prior on your LC14 results, I am surprised that it only increased most of those percentile estimates by ~0.1 K.

Hi Troy,

You ask what I think the reason is for the lack of low estimates of sensitivity prior to the last few years. I can tell you why the instrumental period ECS estimates cited in AR4 were all high, as follows:

Andronova and Schlesinger (2001) was affected by an error in its computer code that substantially biased upwards its estimate for ECS – it seems by ~0.9°C for the case with a realistic set of forcings included.

Gregory et al. (2002) used a very high estimate of aerosol forcing, derived from an unsuitable GCM-based detection & attribution study, and also the erroneous Levitus et al 2000 ocean heat content dataset (an “arithmetical error” – reported by James Annan but no action taken).

All the other studies apart from Forster and Gregory (2006) had results stated using unsuitable uniform priors for ECS and, often, also for ocean diffusivity. That gave them very fat upper tails as well as increasing their median estimates. If the Forster and Gregory (2006) results had not been restated on that basis, the median ECS estimate would have been 1.6 rather than 2.3°C.
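The fat-tail effect of a uniform ECS prior can be sketched numerically. Below is a toy illustration (all numbers invented, not taken from any of the studies above): assume the observations constrain the climate feedback parameter lam = F2x / ECS as a Gaussian, and compare the 95% ECS point under a uniform-in-ECS prior with that under a uniform-in-lam prior.

```python
import math

# Toy illustration (all numbers invented): suppose the observations
# constrain the climate feedback parameter lam = F2x / ECS as a Gaussian.
F2X = 3.71                       # W/m^2 forcing per CO2 doubling
LAM_MEAN, LAM_SD = 2.0, 0.6      # illustrative Gaussian constraint on lam

def lik(S):
    """Likelihood of ECS value S under the Gaussian constraint on lam."""
    lam = F2X / S
    return math.exp(-0.5 * ((lam - LAM_MEAN) / LAM_SD) ** 2)

Ss = [0.05 * i for i in range(10, 401)]       # ECS grid, 0.5 .. 20 K
# Uniform-in-ECS posterior: just the likelihood, renormalised over S.
w_uniform_S = [lik(S) for S in Ss]
# Uniform-in-lam posterior, expressed in S: pick up the Jacobian
# |d lam / d S| = F2X / S^2 when transforming the density to S space.
w_uniform_lam = [lik(S) * F2X / S ** 2 for S in Ss]

def pct(ws, q):
    """q-th percentile of a gridded, unnormalised density."""
    total = sum(ws)
    c = 0.0
    for S, wt in zip(Ss, ws):
        c += wt / total
        if c >= q:
            return S
    return Ss[-1]

print("95% ECS point, uniform in ECS:", round(pct(w_uniform_S, 0.95), 2))
print("95% ECS point, uniform in lam:", round(pct(w_uniform_lam, 0.95), 2))
```

Because the likelihood does not go to zero as ECS grows (lam merely approaches zero), the uniform-in-ECS 95% point lands far above the uniform-in-lam one — the mechanism behind the fat upper tails described above.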

Knutti et al. (2002) used a very weak statistical method and found that their observational constraints did not enable a well constrained ECS estimate to be produced; they did place a lower limit of 1.2°C on ECS, but that figure was biased upwards by use of the same erroneous OHC dataset as Gregory et al. (2002).

Frame et al (2005): statistical errors, ocean heat content miscalculation, and use of GHG-attributable warming derived from an unsuitable GCM-based detection & attribution study. See Lewis (2014), J Climate.

Forest et al (2002 and 2006): multiple statistical errors, poor experimental design, and use of data ending in 1995. See Lewis (2014), J Climate, where a revised ECS best estimate of 1.6°C was obtained when using the full model simulation runs to 2001, better experimental design and correct statistical methods.

I made a crude attempt a few months ago at seeing how much a paleo estimate fitted by a shifted lognormal distribution with 10-90% range of 1-6 K (as per AR5) and a median of 2.86 K (my notes indicate this related to AR5 Fig. 10.20b and was a mean, but I’m not offhand sure what of) alters the ECS median and ranges from Lewis & Curry 2014. Roughly speaking, it increased the 5%, 17%, 50% and 83% points all by about 0.1 C, but significantly reduced the 95% ECS point. However, I didn’t estimate a noninformative prior distribution for the combined estimate, so these results may be some way out. I must try doing so.
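For what it’s worth, the kind of crude combination described above can be sketched as follows. The shifted-lognormal parameters are solved from the stated median (2.86 K) and 10–90% range (1–6 K); the instrumental PDF is a stand-in lognormal centred near the Lewis & Curry 2014 median of ~1.64 K, not the actual LC14 distribution; and multiplying PDFs on a grid is exactly the rough updating (with no noninformative prior for the combination) that these results warn about.

```python
import math

# Shifted lognormal fitted to the paleo summary above: median 2.86 K,
# 10-90% range 1-6 K.  For X = shift + exp(N(mu, sigma)), the shift s
# solves (p10 - s)(p90 - s) = (median - s)^2:
P10, MED, P90 = 1.0, 2.86, 6.0
Z90 = 1.2816                                   # standard normal 90% quantile
SHIFT = (P10 * P90 - MED ** 2) / (P10 + P90 - 2 * MED)
MU = math.log(MED - SHIFT)
SIGMA = math.log((P90 - SHIFT) / (MED - SHIFT)) / Z90

def paleo_pdf(x):
    """Shifted lognormal density; zero at or below the shift point."""
    if x <= SHIFT:
        return 0.0
    z = (math.log(x - SHIFT) - MU) / SIGMA
    return math.exp(-0.5 * z * z) / ((x - SHIFT) * SIGMA * math.sqrt(2 * math.pi))

def instrumental_pdf(x, median=1.64, sigma=0.35):
    """Stand-in lognormal roughly centred on the Lewis & Curry 2014
    median of ~1.64 K -- purely illustrative, not the actual LC14 PDF."""
    if x <= 0:
        return 0.0
    z = (math.log(x) - math.log(median)) / sigma
    return math.exp(-0.5 * z * z) / (x * sigma * math.sqrt(2 * math.pi))

# Naive combination: multiply the two densities on a grid and renormalise.
# Treating a PDF as a likelihood is not strictly valid Bayesian updating,
# so (as stated above) these results may be some way out.
dx = 0.01
xs = [dx * i for i in range(1, 2001)]          # ECS grid, 0.01 .. 20.00 K
combined = [paleo_pdf(x) * instrumental_pdf(x) for x in xs]
total = sum(combined) * dx
cdf, pts = 0.0, {}
for x, p in zip(xs, combined):
    cdf += p * dx / total
    for q in (0.05, 0.17, 0.50, 0.83, 0.95):
        if q not in pts and cdf >= q:
            pts[q] = x
print({q: round(v, 2) for q, v in sorted(pts.items())})
```

The fitted paleo distribution reproduces the 1, 2.86 and 6 K points by construction; everything else here is an invented stand-in to show the mechanics.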

I agree that AR5 used crude methods of combining various lines of evidence, and evidence from different studies within each line of evidence, and did not adequately devalue ‘low information’ study results.

Hi Nic,

Given #1-3, what do you suppose the reason is for the lack of low estimates of sensitivity prior to the last few years?

I suppose I am being a bit cavalier in describing the method above as Bayesian updating (and I confess that, short of some extensive study, your paper is over my head). If I understand, your point is essentially that some sort of synthetic instrumental likelihood function is going to look different from that synthetic instrumental PDF (whereas the paleo likelihood function and PDF could be essentially the same), so which is chosen as the prior will matter. That being said, my expectation is that the uncertainty resulting from these differences is relatively minor compared to the uncertainty in choosing which lines of evidence are independent, which studies go into each line of evidence, and how to create a synthesis of each line of evidence, but perhaps that is incorrect.

I would be curious to see the results of your exercise in combining various lines of evidence. From a glance at the assessment of the last IPCC AR5 report, my impression is that they tended to use an “inclusive” assessment that was more along the lines of an average of the study distributions, rather than taking the product of independent methods. This tends to grant larger probabilities to higher-end sensitivities, and IMO does not properly devalue “low information” study results.

Thanks Nic, interesting… any idea of the degree of the change? I am assuming we are not talking about anything that drops the 280->560 concentration change below 3 W/m^2?

1. The recent lower instrumental estimates do not generally depend on the hiatus. Aldrin 2012, Lewis 2013, Otto 2013, Skeie 2014, Lewis & Curry 2014 all produce much the same median ECS estimates when using data ending at 2000 (Aug 2001 for Lewis 2013; it uses no later data).

2. The reduction in consensus (IPCC) aerosol forcing estimates impacted the Otto 2013 and Lewis & Curry 2014 ECS estimates, but I believe not any of your other instrumental estimates. Indeed, the higher AR4 aerosol forcing estimates used as Bayesian priors increased the Aldrin 2012 and Skeie 2014 ECS estimates beyond where their observational data gave a best fit.

3. The more constrained OHC estimates may have reduced ECS uncertainty ranges, but I do not think they have reduced the median ECS estimates from instrumental studies.

More importantly, your method of combining instrumental and paleo estimates isn’t really valid. Bayesian updating requires the multiplication of a (posterior) PDF from one source – used as a prior – with an independent likelihood function (not a PDF) from a different source. One could argue that the roughly Gaussian paleo estimate warrants a uniform prior, so that its likelihood function would have the same shape as the PDF.

But in any case, Bayesian updating isn’t valid in a case like this, in my view. It provides an ill-defined result that in general will depend on which source is used to provide the PDF and which to provide the likelihood function. See my 2013 paper, “Modification of Bayesian updating where continuous parameters have differing relationships with new and existing data”, arXiv Rep., 25 pp: http://arxiv.org/ftp/arxiv/papers/1308/1308.2791.pdf
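The PDF-vs-likelihood distinction can be shown with a toy sketch (all numbers invented): a posterior PDF is prior × likelihood, renormalised, so unless the prior is uniform the posterior PDF and the likelihood have different shapes — which is why it matters which source supplies which.

```python
import math

# Toy illustration of the PDF-vs-likelihood distinction.  A posterior PDF
# is prior * likelihood (normalised), so unless the prior is uniform the
# posterior PDF and the likelihood differ in shape.  Numbers are invented.
dx = 0.05
xs = [dx * i for i in range(1, 201)]             # parameter grid, 0.05 .. 10

def likelihood(x):
    """Toy Gaussian likelihood centred at 2."""
    return math.exp(-0.5 * ((x - 2.0) / 0.8) ** 2)

def prior(x):
    """A gently decreasing, non-uniform prior (illustrative only)."""
    return x ** -0.5

post = [prior(x) * likelihood(x) for x in xs]
norm = sum(post) * dx
post = [p / norm for p in post]                  # normalised posterior PDF

x_post = xs[post.index(max(post))]               # posterior mode
x_lik = max(xs, key=likelihood)                  # likelihood peak
print(x_post, x_lik)   # the non-uniform prior pulls the mode below the peak
```

Recovering the likelihood from a posterior PDF means dividing the prior back out; if two sources were analysed under different priors, the result of “updating” depends on which posterior is taken as the starting PDF.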

BTW, minor point, in IPCC parlance “very likely” means 90% probable (5-95%) not 95% and “likely” means 66% (17-83%) not 68%.

Sorry, I meant increase more slowly than logarithmically

Apparently the standard logarithmic relationship assumed between CO2 concentration and forcing is likely to be revised to make forcing decrease more slowly than logarithmically, but only modestly. A weaker CO2 band that is not saturated becomes more significant.
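For reference, the standard logarithmic relationship in question is usually taken to be the Myhre et al. (1998) simplified fit, which gives just under 3.7 W/m^2 for a 280 → 560 ppm doubling; the revision described would modestly alter this shape, not the rough magnitude.

```python
import math

# Standard simplified logarithmic CO2 forcing fit (Myhre et al. 1998):
# F = 5.35 * ln(C / C0)  [W/m^2]
def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified logarithmic CO2 forcing in W/m^2 (Myhre et al. 1998)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

f2x = co2_forcing(560.0)   # forcing for a 280 -> 560 ppm doubling
print(round(f2x, 2))       # 3.71
```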

As I did to the above commenter, I would ask you: what value do you then expect for the forcing from a doubling of CO2? I would be very surprised to find such a simple mistake, as the LBLRTMs have been extensively tested against databases like HITRAN, and such a mistake would affect applications far more ubiquitous than those in climate science. Regarding Beer-Lambert, Science of Doom has a good post on why including emission is necessary if you are going to attempt to model the radiative forcing vs. concentration relationship: http://scienceofdoom.com/2011/01/30/understanding-atmospheric-radiation-and-the-%E2%80%9Cgreenhouse%E2%80%9D-effect-%E2%80%93-part-three/
