Troy's Scratchpad

Radiation Budget and Climate Sensitivity

This page will be dedicated to the issue of measuring climate sensitivity from TOA fluxes and surface temperature measurements, originally proposed in Forster and Gregory (2006).  I plan on this being a continually updated page, with edits and additions as the details unfold.  The goal is to use it as a reference when I post on this topic, without having to go through the whole history.  The most recent development in this debate is the controversy surrounding the Spencer and Braswell (2011) paper, along with the Dessler (2011) response.



Forster and Gregory (2006) took advantage of the fact that we now have satellites (they used ERBE) observing top-of-atmosphere (TOA) radiation flux to determine the overall feedback response to temperature change.  A linear model of the Earth’s radiation budget looks like this:

TOA_flux = f – ΔT*λ + X

where f is the external radiative forcing, ΔT is the change in surface temperature anomaly, X is the change in radiation flux that is not responding to surface temperature changes, and λ is the overall climate feedback/response parameter. The climate sensitivity is inversely related to λ, via ΔT = f / λ, and is defined as the change in temperature for the f = 3.7 W/m^2 associated with a doubling of CO2. Clearly, then, the larger this radiative response (λ), the faster the system will equilibrate with respect to T and the lower the sensitivity. The overall value of λ must be positive, but given the estimated Planck response of 3.3 W/m^2/K, the common usage of “positive” or “negative” feedbacks refers to the deviation of λ from this value (somewhat counter-intuitively, a “positive” feedback actually refers to the situation where the overall λ is *less* than the Planck response, in this set-up).
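To make the inverse relationship concrete, here is a minimal sketch using the 3.7 W/m^2 doubling forcing from above; the particular λ values plugged in are just illustrative:

```python
# Equilibrium sensitivity implied by a feedback parameter lambda, via
# dT = f / lambda, with f = 3.7 W/m^2 for doubled CO2.
F_2XCO2 = 3.7  # W/m^2, forcing for a doubling of CO2

def equilibrium_sensitivity(lam):
    """Equilibrium warming (K) for doubled CO2, given lam in W/m^2/K."""
    return F_2XCO2 / lam

# Larger lambda -> stronger radiative damping -> lower sensitivity:
print(equilibrium_sensitivity(3.3))   # Planck-only response, roughly 1.1 K
print(equilibrium_sensitivity(1.25))  # a net "positive feedback" case, ~3 K
```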

With this model, the climate feedback parameter, λ, can be estimated by regressing (f – TOA_flux) against temperature anomalies. Any underestimate of λ from the regression will hence lead to an overestimate of climate sensitivity. The accuracy will of course depend not only on uncertainties in TOA_flux, f, and ΔT (the last of which could lead to regression attenuation), but also on the magnitude of X.   Forster and Taylor (2006) essentially used this method to diagnose the sensitivity of the models from long-term (70+ year) runs.
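As a sketch of that regression step (synthetic data only, with an assumed "true" λ of 2 W/m^2/K, not the actual ERBE record), the estimate behaves well when X really is uncorrelated with T:

```python
# Generate monthly anomalies from TOA_flux = f - lam*T + X, then recover
# lam by regressing (f - TOA_flux) on T with OLS. Synthetic illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 120                       # ten years of monthly anomalies
lam_true = 2.0                # assumed "true" feedback, W/m^2/K
T = rng.normal(0.0, 0.2, n)   # surface temperature anomalies (K)
f = 0.0                       # no external forcing over this short record
X = rng.normal(0.0, 0.5, n)   # radiative noise uncorrelated with T
toa_flux = f - lam_true * T + X

y = f - toa_flux              # the regressand used by FG06
lam_hat = np.polyfit(T, y, 1)[0]
print(round(lam_hat, 2))      # close to lam_true when X is uncorrelated with T
```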

Spencer and Braswell (2008) objected to the assumption in FG06 that X would be uncorrelated with T. They suggest that unknown internal radiative forcings (e.g. “high-frequency stochastic variations in cloud cover”), which they call N, would necessarily be correlated with T, as these forcings would influence the surface temperature. This forcing would generally be in the opposite direction of the feedback (a net downward forcing leads to an increase in T, while an increase in T leads to a net upward response), and so while the *actual* estimate for the feedback TOA flux should be (f + N – TOA_flux), the unknown nature of N effectively yields (f – (TOA_flux + N)), because N is included in the measured TOA_flux. The difference between the two is thus 2N. The resulting bias (as opposed to simple inaccuracy) depends on the magnitude of the influence N has on T relative to other changes in T.  This argument does NOT require clouds to be responsible for long-term climate change (although SB08 mentions possible low-frequency sources of N as well), but rather notes that even seasonal/interannual-scale changes in N would lead to overestimates of sensitivity.
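The SB08 bias is easy to reproduce in a toy one-box simulation (all parameters here are assumed for illustration, not taken from the paper): when a red-noise internal forcing N drives T and nothing else varies, the naive regression slope collapses toward zero even though the true feedback is strong:

```python
# Toy illustration of the SB08 argument, with monthly time steps and
# assumed parameters.
import numpy as np

rng = np.random.default_rng(1)
n_steps = 2000
C = 20.0             # mixed-layer heat capacity, W-month/m^2/K (assumed)
lam_true = 3.0       # "true" feedback, W/m^2/K
N = np.zeros(n_steps)    # internal radiative forcing (e.g. cloud noise)
T = np.zeros(n_steps)    # surface temperature anomaly
for i in range(1, n_steps):
    N[i] = 0.9 * N[i - 1] + rng.normal(0.0, 1.0)        # red-noise forcing
    T[i] = T[i - 1] + (N[i] - lam_true * T[i - 1]) / C  # C dT/dt = N - lam*T

# The measured TOA flux contains both the forcing N and the feedback response:
toa_flux = N - lam_true * T
# The naive regression treats all of toa_flux as feedback:
lam_hat = np.polyfit(T, -toa_flux, 1)[0]
print(lam_hat)  # far below lam_true: sensitivity would be badly overestimated
```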

Murphy et al. (2009) continued the method of FG06, adding in CERES measurements up to 2005 as well as ERBE, and got an even higher estimate for sensitivity. It did not specifically address the issues in SB08, and they leave out the Pinatubo-era ERBE measurements in their estimates, which would have been the period least influenced by the SB08 issue mentioned above (because the unknown radiative forcing, N, would have been small relative to the magnitude of the change in T).

Spencer and Braswell (2010) expanded considerably on the original claims in SB08, examining model runs from AR4 as well.

Murphy and Forster (2010) argued against the issue of substantial bias brought up in SB08. To understand this argument, it is necessary to include the full linear model of Earth’s energy balance:

C(dT/dt) = N – ΔT*λ + S.

Here, C is the heat capacity of the mixed layer relevant to the time period, N includes all radiative forcing (the f + X terms above), while S is the non-radiative forcing of temperature changes (e.g. heat transport from the deep ocean into the mixed layer). Now, if the unknown radiative forcing portion of N is small relative to S, then it will also be small relative to the changes in T, since T is forced by N + S. Under that scenario, the biases in the regression of (f – TOA_flux) against T will be tiny. Thus, MF2010 argues that the mixed layer used in SB08 is too shallow, and C should be larger. The effect is that since we know the magnitude of dT from measurements (and don’t directly know the S term), a larger C implies that C(dT/dt) is large relative to the N term, and the difference must be made up by the S term. This would then suggest S >> N, and the bias would not be a concern.  However, this diagnosis of C suffers from a similar problem as Dessler 2011, which I’ll discuss later.
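Extending the toy one-box model above to include S shows the MF2010 point numerically (again, every parameter here is assumed for illustration): the regression bias shrinks as the non-radiative forcing S comes to dominate the internal radiative forcing N:

```python
# Toy version of the MF2010 argument: in C dT/dt = N - lam*T + S, the naive
# feedback estimate improves as S grows relative to N. Assumed parameters.
import numpy as np

def naive_lambda(n_amp, s_amp, lam_true=3.0, C=20.0, n_steps=5000, seed=2):
    """Simulate the one-box model (monthly steps) and return the naive
    OLS estimate of the feedback parameter."""
    rng = np.random.default_rng(seed)
    N = np.zeros(n_steps)  # internal radiative forcing
    S = np.zeros(n_steps)  # non-radiative forcing (e.g. ocean heat transport)
    T = np.zeros(n_steps)
    for i in range(1, n_steps):
        N[i] = 0.9 * N[i - 1] + rng.normal(0.0, n_amp)
        S[i] = 0.9 * S[i - 1] + rng.normal(0.0, s_amp)
        T[i] = T[i - 1] + (N[i] - lam_true * T[i - 1] + S[i]) / C
    toa_flux = N - lam_true * T      # N is hidden inside the measured flux
    return np.polyfit(T, -toa_flux, 1)[0]

print(naive_lambda(1.0, 0.0))   # N-dominated: estimate is badly biased
print(naive_lambda(1.0, 10.0))  # S >> N: estimate sits near lam_true = 3
```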

I’ve included Chung, Soden, and Sohn (2010), despite it being mostly ancillary and a reaction to Lindzen and Choi (2009), because I think they include an interesting figure (their Figure 3). Basically, the figure shows that there is at least some correlation between the interannual damping radiative response in the models and climate sensitivity.

Dessler (2010), which is solely about the cloud feedback, does not directly relate to the debate, but Dessler brings himself in by making the following claim with reference to SB10: “The recent suggestion that feedback analyses suffer from a cause-and-effect problem does not apply here: The climate variations being analyzed here are primarily driven by ENSO, and there has been no suggestion that ENSO is caused by cloud variations.”  This claim is curious, particularly because Dessler later notes that “Obviously, the correlation between R_cloud and T_s is weak (r^2=2%), meaning that factors other than T_s are important in regulating R_cloud.”   Basically, Dessler is acknowledging that there is a change in cloud radiative forcing going on here that is NOT a response to the temperature change, which is essentially the definition of the N term in SB08 and SB10.  The question remains, however, whether N is sufficient in strength to bias the results.

Lindzen and Choi (2011) attempts to find the climate response signal by avoiding the issues raised by Spencer and Braswell.  I will need to read this over more to understand the process exactly, particularly since it uses terminology so different from the other papers.   One interesting method, originally discussed in LC09 and extended here, is to examine the periods with the largest change in T; these would be the periods where the known forcing and feedback response would dominate the measured TOA flux, and there would be the least “noise” from the unknown radiative forcing (N).

Spencer and Braswell (2011), the paper that started quite a firestorm, moves forward with the SB08 and SB10 argument, and notes that the unknown radiative forcing (N) WOULD bias the Dessler 2010 regression.  They argue that the time-lagged signature gives evidence of a large, unknown radiative forcing term (N), and note that climate models don’t match it, suggesting they overestimate climate sensitivity.

Dessler (2011) makes two arguments in response to SB11.  The second is that models which better simulate ENSO variation yield results more closely matching observations in the lagged relationship plots, which seems to be a strong argument against using the lagged relationship as an estimate of sensitivity.  The first argument is that S >> N based on the Earth’s observed energy balance, similar to the MF2010 argument above, and that under this scenario the bias is tiny.  As I show in my posts below, the problem with the MF10 and Dessler11 method is that they assume too deep of a mixed layer (and hence too high a value for C) for the relevant time scale.  Furthermore, noise in monthly T is included in the estimate of the magnitude of S, when averaging over longer periods would be more appropriate.  Now, C(dT/dt) can be better estimated using measurements from the Argo data, but rather than use values for the mixed layer, Dessler11 reports fluctuations down to the 750 meter layer (which does not directly force surface temperature changes).  Correcting for that strongly influences the estimated ratio of S:N.
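To see why the assumed layer depth matters so much, note that the areal heat capacity scales linearly with depth, C = ρ·c_p·d. A quick sketch with standard seawater constants (the depths are just the two contrasting choices discussed above):

```python
# Areal heat capacity of an ocean layer of depth d, converted to W-yr/m^2/K.
# Constants are standard seawater values.
RHO = 1025.0              # seawater density, kg/m^3
CP = 3990.0               # specific heat of seawater, J/kg/K
SECONDS_PER_YEAR = 3.156e7

def heat_capacity(depth_m):
    """Areal heat capacity in W-yr/m^2/K for a layer of the given depth."""
    return RHO * CP * depth_m / SECONDS_PER_YEAR

print(heat_capacity(50.0))   # shallow mixed layer: about 6.5 W-yr/m^2/K
print(heat_capacity(750.0))  # 0-750 m layer: 15x larger
```

Since the inferred magnitude of S scales with C(dT/dt), using the 750 m layer instead of a shallow mixed layer inflates the apparent S:N ratio by the same order of magnitude.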

So, where does the debate currently stand?  In part, I have included the resources below to help people come to their own conclusions.  In my mind, the original argument put forth in Spencer and Braswell (2008) regarding the unknown radiative forcing (N) is a strong one, and both Dessler (2011) and Murphy and Forster (2010) make some serious mistakes (mainly regarding the relevant heat capacity) that fail to adequately show that the effect of N is minimal.  Neither did the SB11 work seem to give evidence on its own of a large influence from N.  I do find it interesting that the Pinatubo period, during which the known forcing and feedback response would be greatest relative to the internal radiative forcing (N), is the period which yields the largest feedback response.  This would seem to lend some credence to the idea that N is playing a role in other estimates, but it is far too short a period to yield conclusive results.

There are philosophical reasons to question whether we can diagnose climate sensitivity this way (despite CSS10, are radiative responses to ENSO really indicative of long-term responses? is the climate feedback a constant?) as well as other methodological issues (OLS vs. TLS regressions, response timing offsets between surface and atmospheric temperatures), so I’m curious to see how this whole thing plays out.
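On the OLS vs. TLS point: measurement noise in T attenuates the OLS slope toward zero, while total least squares can correct for it. A sketch with synthetic data (the noise levels are assumed, and TLS is only unbiased here because the errors in both variables are given equal variance):

```python
# Regression attenuation from noisy T, and a TLS remedy via SVD.
import numpy as np

rng = np.random.default_rng(3)
n = 500
lam_true = 2.0
T_true = rng.normal(0.0, 0.3, n)                 # "true" temperature anomalies
y = lam_true * T_true + rng.normal(0.0, 0.2, n)  # radiative response + noise
T_obs = T_true + rng.normal(0.0, 0.2, n)         # noisy measured temperatures

# OLS: attenuated toward zero by the measurement error in T_obs
ols_slope = np.polyfit(T_obs, y, 1)[0]

# TLS: the fitted line's normal is the right singular vector with the smallest
# singular value of the centered data matrix; the slope follows from it.
A = np.column_stack([T_obs - T_obs.mean(), y - y.mean()])
v = np.linalg.svd(A, full_matrices=False)[2][-1]
tls_slope = -v[0] / v[1]

print(ols_slope, tls_slope)  # OLS comes in low; TLS sits nearer lam_true
```

An attenuated λ here means an inflated sensitivity, which is why the regression choice matters for this debate.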

Journal Publications

Forster and Gregory, 2006: The Climate Sensitivity and Its Components Diagnosed from Earth Radiation Budget Data.

Forster and Taylor, 2006: Climate Forcings and Climate Sensitivities Diagnosed from Coupled Climate Model Integrations.

Spencer and Braswell, 2008: Potential Biases in Feedback Diagnosis from Observational Data: A Simple Model Demonstration. Corresponding blog post.

Murphy et al, 2009: An observationally based energy balance for the Earth since 1950.

Murphy and Forster, 2010: On the accuracy of deriving climate feedback parameters from correlations between surface temperature and outgoing radiation.

Spencer and Braswell, 2010: On the diagnosis of radiative feedback in the presence of unknown radiative forcing. Corresponding blog post.

Chung, Soden, and Sohn, 2010: Revisiting the determination of climate sensitivity from relationships between surface temperature and radiative fluxes.

Dessler, 2010: A Determination of the Cloud Feedback from Climate Variations over the Past Decade

Lindzen and Choi, 2011: On the Observational Determination of Climate Sensitivity and Its Implications.

Spencer and Braswell, 2011: On the Misdiagnosis of surface temperature feedbacks from variations in Earth’s radiant energy balance.

Dessler, 2011: Cloud variations and the Earth’s energy budget.

My Blog Posts

– My first post on the topic, attempting to resolve the differences between the Spencer and the Forster and Gregory estimates of climate sensitivity during the Pinatubo period.

– Two posts covering an issue with the method not currently in the peer-reviewed literature: the timing offset between sea surface and atmospheric temperatures will lead to an underestimate of the feedback response, although SB10 does seem to note the stronger feedback response with tropospheric temperatures.

– A post covering S&B2011.  Their point about an unknown radiative forcing causing an underestimate of the feedback response remains, but using the lagged relationships as a proxy for sensitivity is not as convincing.

– Posts discussing why the Dessler 2011 (and MF2010) attempts to dismiss the effect of the unknown radiative forcing are flawed, particularly because the natural “ENSO” forcing is overestimated by using an inappropriate mixed-layer depth.

– A post primarily examining the issues of incorrect feedback diagnosis in control runs of models, sampling errors from 10-year periods, and OLS regression attenuation, along with a brief summary of other problems with the method.

– A closer look at possibly resolving the regression attenuation due to uncertainties in surface temperature measurements by using TLS, and the implications.

Other Blog Posts

– A good introduction to the topic, as well as a foray into the debate over unknown radiative forcing (N).

– An interesting take on the debate from Dr. Held.

– Dr. Spencer’s response to MF2010.

– Somewhat tangential, but Nic Lewis covers the importance of the FG06 method, as well as the robust regression technique employed.



  1. […] may have noticed my new page, which can be accessed on the right.  I created this because I’ve been posting often lately […]

    Pingback by New page on the observation-based estimate of climate sensitivity « Troy's Scratchpad — November 14, 2011 @ 9:02 am

  2. […] to measure climate sensitivity and where the Spencer and Braswell arguments fit in, please see this page .  But as a quick summary, I’ll note that in Forster and Gregory (2006), the authors comment […]

    Pingback by Foster and Rahmstorf 2011 lends support to…Spencer and Braswell? « Troy's Scratchpad — December 12, 2011 @ 6:35 pm

  3. The following three points should trouble you regarding the whole concept of sensitivity of climate to carbon dioxide …

    (a) Predictions of the most warming supposedly due to carbon dioxide are in the Arctic. Yet historical records at Jan Mayen Island (latitude 70.9N – within the Arctic circle) show absolutely no warming since 1930 and in fact show warmer temperatures in the 1930’s.

    (b) Prof. Claes Johnson has published a detailed mathematical proof that back radiation (having frequencies less than or equal to the original upward radiation from the surface) cannot “overcome” that radiation and actually get absorbed and converted to extra thermal energy. Only radiation with frequencies above the cut off (as determined by Wien’s Displacement Law) can be converted to thermal energy, as is the case with UV and visible spectrum radiation from the Sun.

    (c) Curved trend lines shown in plots by both Trenberth and Spencer clearly indicate that a maximum has been passed and a decline commenced.

    There are links and further details on this at my site

    Comment by Doug Cotton — December 28, 2011 @ 7:39 pm

  4. […] .  The model they develop is only a bit more complicated than the simple energy balance model we’ve previously discussed , as they use the current TOA radiative imbalance, along with the surface temperature change since […]

    Pingback by Another observationally-based estimate for climate sensitivity… « Troy's Scratchpad — February 16, 2012 @ 12:39 pm

  5. […] charts galore.  I’ve been looking at this as it folds into the ongoing discussion of inferring climate sensitivity from the radiation budget, although I don’t plan on putting much analysis into this post.  It is more for visual […]

    Pingback by Temperature and Radiative Flux Evolution with ENSO « Troy's Scratchpad — April 30, 2012 @ 3:25 pm

  6. […] and "feedback" in current climate lexicon.  For more information you can see my page here,  although I think parts of that may out-of-date enough to not totally reflect my evolving […]

    Pingback by Estimating Sensitivity from 0-2000m OHC and Surface Temperatures « Troy's Scratchpad — October 19, 2012 @ 9:01 pm

Thank you very much for this blog.

    Comment by Tibor Skardanelli — October 25, 2012 @ 4:05 am
