As discussed in a previous post, Forster and Gregory (2006) and Murphy et al. (2009) use OLS regressions of temperatures against the difference between measured TOA flux and known/estimated forcings to try to determine the overall climate feedback. Since the climate sensitivity is taken to be the inverse of this feedback, any underestimate of the feedback will lead to an overestimate of the climate sensitivity from this method.
Now, one possible issue with the method is that there are measurement errors in the monthly temperature anomalies, and OLS underestimates the regression coefficient when there are errors in the independent variable. A possible way to combat this regression attenuation is to use total least squares, which fits the line while accounting for errors in the independent variable as well. Of course, the trouble with this method is that you need some idea of the relative variance of the “errors”; otherwise you can swing in the opposite direction and get a huge overestimate.
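To see why attenuation matters here, a quick synthetic sketch (illustrative numbers only, not the actual CERES/temperature data) shows how noise in the regressor pulls the OLS slope toward zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: true relationship y = 2*x, with measurement noise
# added to the regressor. All values here are made up for illustration.
n = 5000
x_true = rng.normal(0.0, 1.0, n)
y = 2.0 * x_true + rng.normal(0.0, 0.5, n)   # noise in the response only
x_obs = x_true + rng.normal(0.0, 1.0, n)     # noise in the regressor

slope_clean = np.polyfit(x_true, y, 1)[0]    # recovers ~2
slope_noisy = np.polyfit(x_obs, y, 1)[0]     # attenuated

# Expected attenuation factor is var(x_true) / (var(x_true) + var(noise))
# = 1 / (1 + 1) = 0.5, so the noisy-regressor slope should be near 1.
print(slope_clean, slope_noisy)
```

In the flux regression, an attenuated slope means an underestimated feedback, and hence an inflated sensitivity.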
As I don’t have a strong statistics background, implementing this Deming regression (a simple, specialized case of TLS) is rather new to me. One thing that is clear, however, is that the assumed value of δ – the ratio of the error variance in the dependent variable to the error variance in the independent variable – greatly affects the resulting estimate. Below, I show the Deming regression estimates for the overall feedback response (lambda, or Y, the inverse of sensitivity) based on the assumed δ. I use the HadCRUT3 and GISS datasets and the CERES net TOA flux measurements from March 2000 through December 2010 (the length of the CERES dataset):
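For reference, the Deming slope has a closed form in terms of the sample variances and covariance. Here is a minimal numpy sketch of that formula (my own illustration on synthetic data, not the script behind the figures above); note that it recovers the OLS slope as δ → ∞:

```python
import numpy as np

def deming_slope(x, y, delta):
    """Deming regression slope; delta is the ratio of the error
    variance in y to the error variance in x."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    sxx = np.var(x)
    syy = np.var(y)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    # Closed-form maximum-likelihood slope for Deming regression
    return (syy - delta * sxx
            + np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)
            ) / (2.0 * sxy)

# Sanity check on synthetic data: as delta grows, the estimate approaches
# the OLS slope; a small delta attributes most of the error to x and
# steepens the fitted line.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 2000)
y = 2.0 * x + rng.normal(0.0, 1.0, 2000)
ols = np.polyfit(x, y, 1)[0]
for d in (0.1, 1.0, 100.0, 1e9):
    print(d, deming_slope(x, y, d))
print("OLS:", ols)
```

The δ-sweep in the plot below is just this calculation applied to the temperature and Q-N series.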
Using OLS, which assumes no errors in the independent variable (equivalent to δ = infinity), yields climate feedbacks of 1.16 and 1.19 W/m^2/K for GISS and HadCRUT respectively (corresponding to a sensitivity of around 3.2 C).
The red lines in the plots above correspond to δ = the variance of Q-N divided by the variance of the temperatures. There is no strong reason to think this assumption is correct, but for reference, those values of δ would yield estimates of 5.10 and 6.77 W/m^2/K for the climate response in the two temperature sets, corresponding to tiny sensitivities of 0.75 and 0.56 C.
As Nic mentions in the comments of that last post, and as Forster and Gregory (2006) note, the errors responsible for the low correlation are not necessarily measurement errors, but rather other radiative influences. However, it appears that even if we assume the error variance in Q-N is much larger (say, 75x) than the error variance due to uncertainty in the monthly temperature anomalies, the effect is still quite noticeable (about 2.5x the climate feedback estimated from HadCRUT, 1.5x from GISS). In fact, if we take 0.075 C as the 1-sd error in the monthly temperatures and use the variance of Q-N itself as an upper bound on its possible errors, that 75x ratio is roughly what we get.
I should note that Murphy et al. (2009) also uses orthogonal regression for comparison, and this (predictably) leads to higher estimates of the response, though lower than I would expect based on my tests. At the moment, however, I’m not able to reproduce their lower results using orthogonal distance regression (which is basically the Deming regression above with δ = 1, although they likely adjusted for units as well). They make a cause-and-effect case for why OLS is more appropriate (surface temperature influences radiative flux at 0 time lag more than the reverse), and it certainly seems that assuming the same errors in both variables is probably incorrect, but I fail to see why this means we should not take into account ANY of the measurement errors in T, particularly when the errors on a monthly scale seem large relative to the monthly anomalies themselves. Of course, as I mentioned above, I am still in the process of learning these methods.
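For what it’s worth, the δ = 1 case can be cross-checked against scipy’s orthogonal distance regression routine, which with unit weights minimizes perpendicular distances (again a synthetic sketch with made-up numbers, not the actual flux data):

```python
import numpy as np
from scipy import odr

rng = np.random.default_rng(2)
n = 1000
t = rng.normal(0.0, 1.0, n)            # latent "true" regressor
x = t + rng.normal(0.0, 0.3, n)        # observed with error
y = 2.0 * t + rng.normal(0.0, 0.3, n)  # response with equal-sized error

# Unweighted ODR minimizes perpendicular distances, i.e. Deming with
# delta = 1 (in whatever units x and y happen to be in).
model = odr.Model(lambda beta, x: beta[0] * x + beta[1])
fit = odr.ODR(odr.RealData(x, y), model, beta0=[1.0, 0.0]).run()
slope_odr = fit.beta[0]

slope_ols = np.polyfit(x, y, 1)[0]
print(slope_odr, slope_ols)  # ODR slope sits above the attenuated OLS slope
```

Since the simulated error variances are equal here, δ = 1 happens to be the right assumption and ODR recovers the true slope; on real data with unequal (and unknown) error variances, there is no such guarantee.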
Anyhow, the script for this post is available here.