In my previous attempts to calculate feedbacks, I’ve found that it is typically the satellite temperature indices (the lower troposphere temperatures) that give the highest correlation. There are some good reasons to think this should be the case – for example, I’ve been working with the radiative kernels recently, and if I recall correctly some 85% of the temperature feedback response comes from the atmospheric layers rather than from the surface.
In a recent discussion at the Air Vent, TTCA brings up an interesting point: if the bulk of the feedback response is due to atmospheric temperature changes, then the regressions must be performed against those. The “regression” method I’m referring to here is simply the one originally pioneered by FG06 and discussed more here. Of course, if the surface and atmospheric temperature variations track one another closely in time, then the feedback response will be near instantaneous with respect to surface temperatures as well, and we should be fine. I’ve decided to take a closer look in this post.
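The FG06-style regression can be sketched as an ordinary least-squares fit of the radiative response (forcing minus net TOA flux, with the TOA imbalance taken as downward-positive) against a temperature anomaly. This is a minimal illustration, not the actual script; the variable names are my own.

```python
import numpy as np

def feedback_slope(Q, N, T):
    """Feedback parameter (W/m^2/K) as the OLS slope of the
    radiative response (Q - N) against temperature anomaly T.
    Q: forcing, N: net downward TOA flux, T: temperature anomaly."""
    Q, N, T = map(np.asarray, (Q, N, T))
    slope, _intercept = np.polyfit(T, Q - N, 1)
    return slope
```

A larger slope means a stronger (more stabilizing) feedback and hence a lower climate sensitivity.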
My script and data are available here. The script is something of a hodgepodge, as is the post itself.
First, a quick look at the different temperature series anomalies (relative to the 2000-2010 baseline):
Now here’s a look at the correlations (r^2 values) between the various indices:
As should be clear, the satellite temperatures (UAH and RSS) correlate very well with one another, and the surface temperatures (GISS and HadCRUT) also show fairly strong correlation. Clearly, the same “types” of temperatures (surface vs. LTT) correlate better with each other than with those of the other type. This makes sense, but as we’ll see, one reason is that 70% of the surface is sea surface, and atmospheric temperatures actually lag sea surface temperatures by 1-2 months.
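The r^2 table above amounts to squaring the pairwise correlation matrix of the anomaly series. A minimal sketch (the dict layout and index names are assumptions, not the post’s actual data handling):

```python
import numpy as np

def r2_table(series):
    """Pairwise r^2 between equal-length anomaly series.

    `series` maps index name -> 1-D array; returns the names in
    order and the matrix of squared correlation coefficients."""
    names = list(series)
    data = np.vstack([np.asarray(series[n]) for n in names])
    return names, np.corrcoef(data) ** 2
```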
And here is how they correlate at different lag times with the atmospheric temperatures:
Clearly, zero lag does not correlate as well with atmospheric temperatures as other lags do. This suggests that atmospheric temperatures respond to sea surface temperatures a few months later, which in turn means that using instantaneous surface temperatures to estimate feedbacks will decorrelate our results even further.
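The lag scan can be sketched as follows: shift the SST series earlier by 0 to max_lag months and record the r^2 against the lower-troposphere series at each lag. This is a hypothetical helper, not the post’s script.

```python
import numpy as np

def lagged_r2(sst, ltt, max_lag=6):
    """r^2 between lower-troposphere temps and SST from `lag`
    months earlier, for lag = 0..max_lag (ocean leads)."""
    sst, ltt = np.asarray(sst), np.asarray(ltt)
    out = {}
    for lag in range(max_lag + 1):
        a = sst[:len(sst) - lag]  # SST up to `lag` months before the end
        b = ltt[lag:]             # LTT starting `lag` months later
        out[lag] = np.corrcoef(a, b)[0, 1] ** 2
    return out
```

The lag with the highest r^2 is then a rough estimate of the ocean-to-atmosphere response time.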
First, a quick look at our feedback estimate using RSS LTT. Here I am using the CERES NET TOA flux observations (N) from my cloud feedback posts, and am estimating the forcing (Q) as simply a linear change from 0 to 0.25 W/m^2 to represent the GHG increase over the period (same estimate used in Dessler10).
This 2.93 slope (W/m^2/K) corresponds to a sensitivity of roughly 1.3 C per doubling of CO2. Now, what happens if we simply use sea surface temperature (Reynolds in this case)?
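The slope-to-sensitivity conversion simply divides the standard ~3.7 W/m^2 forcing per CO2 doubling by the feedback parameter; a quick check of the arithmetic:

```python
# Equilibrium sensitivity per CO2 doubling from the regression slope.
F_2X = 3.7    # W/m^2 per doubling of CO2 (standard estimate)
slope = 2.93  # W/m^2/K, the regression slope above
sensitivity = F_2X / slope  # ~1.26 K per doubling
```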
We get essentially no correlation, leading to a near-zero estimate of the climate feedback and thus an extremely high climate sensitivity. But if we use the SST anomaly from two months earlier (the approximate time it takes those temperature changes to dominate the lower troposphere), it is quite a different story:
A much better correlation (although not great), and once again a higher estimate of the feedback. Obviously the r^2 is still not as high as with either of the satellite indices, but this is to be expected if atmospheric temperatures are affected by more than just the previous months’ SST.
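The lagged version of the regression simply pairs this month’s radiative response with the SST anomaly from two months earlier. A sketch under the same illustrative variable names as before:

```python
import numpy as np

def lagged_feedback_slope(Q, N, sst, lag=2):
    """OLS slope (W/m^2/K) of the radiative response (Q - N)
    against SST from `lag` months earlier."""
    Q, N, sst = map(np.asarray, (Q, N, sst))
    y = (Q - N)[lag:]             # response, aligned to later months
    x = sst[:len(sst) - lag]      # SST `lag` months before each response
    return np.polyfit(x, y, 1)[0]
```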
Anyhow, here are the results of my runs:
Note that this does not deal with the issues raised by Spencer and Braswell (2010, 2011) and Lindzen and Choi (2011) regarding forcings confounding the signal. In this case, we’re using a lagged regression to calculate the feedback simply because that is when the feedback occurs, given the lag in the atmospheric temperature response to sea surface changes.