Troy's Scratchpad

September 12, 2011

Thoughts on Spencer and Braswell 2011

Filed under: Uncategorized — troyca @ 6:08 pm

I’m a bit late to the party, but I’ve now had an opportunity to look over the Spencer and Braswell 2011 paper, along with the criticisms from Real Climate.  I’d like to keep this technical amid all the controversy, and clearly I have some catching up to do with the Dessler 2011 paper (along with Spencer’s responses on his blog), which I’d like to look at in a different post.  The script I’ve used to reproduce the model in the paper is available here.     


Basic Arguments:

From my reading of SB11, there seem to be three main points:

1) Using a simple model, it can be shown that unknown radiative forcings lead to underestimates of the feedback parameter, and hence overestimates of the climate sensitivity.

2) The lagged signatures of the observations don’t match up well with the lagged signatures of the global climate models.

3) The lagged signature of the simple model with radiative forcings matches up well with the lagged signature of the observations, suggesting that there are significant radiative forcings over the period. 

The main technical criticisms laid out at RealClimate seem to be:

1) The differences between the models and observations in SB11 result from a combination of dataset choices (CERES SSF and HadCRUT), the choice of models to compare against, and noise.

2) The SB11 model is too simple, excluding ENSO, which may in itself be the reason for the lagged signature without needing to invoke radiative forcings, and the SB11 match with observations may be the result of tuning.

3) The ENSO variations are not the result of cloud forcings, and so SB11 argument #1 is moot.


My Take

To me, Spencer’s model makes sense for illustrating his simple first point (that unknown cloud forcings could cause a misdiagnosis of the climate sensitivity).  My reproduction of his first figure using monthly steps is available here:


I don’t believe this point is in dispute — the only dispute is whether it is actually relevant here (that is, whether there exist unknown radiative forcings over the period).
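For concreteness, here is a minimal Python sketch of this kind of simple forcing-feedback model (my own toy version, not the script linked above; the parameter values, white-noise forcing, and crude monthly time-stepping are all illustrative assumptions, not SB's exact setup):

```python
import numpy as np

def diagnosed_feedback(frac_radiative, lam=3.0, depth=25.0, months=600, seed=0):
    """Simulate the simple mixed-layer model
        Cp * dT/dt = N(t) + S(t) - lam * T
    and regress the "measured" TOA flux (lam*T - N) against T.

    frac_radiative -- share of forcing noise that is radiative (N, the "unknown" forcing)
    lam            -- true feedback parameter, W/m^2/K
    depth          -- ocean mixed-layer depth, m (sets the heat capacity)
    """
    rng = np.random.default_rng(seed)
    # mixed-layer heat capacity, expressed in W-months per m^2 per K
    cp = 1000.0 * 4180.0 * depth / (30 * 86400)
    T, Ts, flux = 0.0, [], []
    for _ in range(months):
        N = frac_radiative * rng.normal()          # unknown radiative forcing
        S = (1.0 - frac_radiative) * rng.normal()  # non-radiative (e.g. ocean) forcing
        T += (N + S - lam * T) / cp
        Ts.append(T)
        flux.append(lam * T - N)  # the satellite sees feedback minus the forcing
    return np.polyfit(Ts, flux, 1)[0]  # diagnosed "feedback" slope
```

With purely non-radiative forcing, the regression recovers the true lambda exactly; mixing in radiative forcing noise drags the diagnosed slope well below it (with these crude white-noise settings it can even go negative), which is the misdiagnosis SB describe.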

For SB’s point number two, things are not nearly as clear-cut.  I find the differences interesting, but as we’ll see later, I think figure 3 (shown below) actually undermines SB’s later point.  And as the Trenberth and Fasullo post at RealClimate points out, it is important to see to what degree the differences are the result of dataset and GCM choices.  On the other hand, adding the error bounds the way that TF does could also be misleading: what we essentially care about here is the amplitude of the variations in the lag regression plots, so if all of the model runs have essentially the same shape but are merely shifted up or down the y-axis, the envelope gives the impression of more uncertainty about the "shape" than actually exists.  The bottom plot in the TF post, which uses all models, shows so much uncertainty that it is hard to find observations that wouldn’t fit inside it.


The weakest part of the SB11 paper, to me, is point number three.  Yes, the simple model matches up well with observations, as they highlight in figure 4, which I have attempted to reproduce below:


However, note the shape of the non-radiative forcings line, and then compare it to the climate-model lines from SB figure 3 above.

To make the case that the climate models are off because they assume variations are not radiatively forced (in the 21st century), you would expect the climate model lines to look like the non-radiatively-forced line in figure 3.  Of course, the plot also uses runs from the 20th century, which includes periods where the GCM variations were largely radiatively forced, so the result is somewhat muddled.  Nonetheless, because the GCMs don’t match the non-radiative forcing line, and because they have a shape that is at least somewhat more similar to the observations than that line, it suggests that the simple model might be missing something in this lead-lag relationship that IS present in the GCMs.  That brings us to the main issue that others have raised: are there any other physically reasonable models (e.g. those that include ENSO) that can reproduce the observations?  My understanding is that Dessler 2011 focuses on whether GCMs are up to the task.
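For anyone wanting to compute lag regression curves of this kind themselves, here is a minimal sketch of the calculation (my own sign convention for the lag; SB's plotting convention may differ, and this is not necessarily their exact method):

```python
import numpy as np

def lag_regression(T, flux, max_lag=12):
    """Regression slope of flux on T at each lead/lag, in time steps.

    Positive lag means flux follows T by `lag` steps (one possible
    convention; check the sign convention of any plot you compare to).
    """
    T = np.asarray(T, float) - np.mean(T)
    flux = np.asarray(flux, float) - np.mean(flux)
    n = len(T)
    slopes = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = T[:n - lag], flux[lag:]   # flux lags T
        else:
            x, y = T[-lag:], flux[:n + lag]  # flux leads T
        slopes[lag] = np.polyfit(x, y, 1)[0]
    return slopes
```

As a sanity check, feeding it a flux series that is exactly twice the temperature shifted by three steps produces a slope of 2 at lag 3 and roughly zero elsewhere.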

Now, with respect to the SB model and tuning, the equation seems pretty straightforward, but it involves three major choices: 1) the depth of the ocean mixed layer, which determines the heat capacity, 2) the choice of lambda (the feedback response), and 3) the choice of noise models for both the radiative and non-radiative forcings.  The figure below shows the 70% radiative forcing case, but with lambda and the ocean layer depth changed:


At first glance the tuning doesn’t seem to be much of an issue.  The choice of ocean layer depth does not strongly affect the amplitude, whereas the sensitivity does, which is the point of the exercise.  SB are trying to show that models with high sensitivity (lambda=1) yield the flatter lines in the regression, whereas those with low sensitivity (lambda=3) show more amplitude, more in sync with the observations.  This perhaps explains why SB stratified the GCMs by climate sensitivity in their lag regression chart.  However, as shown in the TF post, the GCMs don’t necessarily break down that way.

If the "simple model" with no radiative forcings could simulate the lagged signature of GCMs in the early 21st century, I think SB11 would have a much stronger case.  As it is, the evidence presented in the paper leaves one wondering if there are other models with non-radiatively forced ENSO variations that can also explain the lagged signature. 

However, just because I don’t find SB point #3 conclusive does not mean I necessarily agree with TF’s point #3.  That clouds respond to ENSO does not mean they can’t result in the type of misdiagnosis that SB refers to, as I explained in a comment at the Air Vent:

Consider the hypothetical scenario where winds associated with El Nino blow clouds in a specific region from over an area of low surface albedo to one of high surface albedo, thus creating a downward positive flux anomaly of X over what we would expect otherwise. Now, at the same time a global temperature increase, dT, has occurred due to El Nino, causing a radiative feedback response of Y.

What we are trying to determine is the radiative response to the temperature increase, or Y/dT. But we only have the TOA flux measurement, which, since X and Y are in opposite directions, will be equal to TOA = Y – X. So obviously (Y – X)/dT will be an underestimate, depending on the size of X.

Now, you can label X whatever you want, as it may be driven by ENSO. But X is not a response to the temperature increase of ENSO (which is what we care about WRT CO2 sensitivity), and causes an underestimate of the actual response to temperature if it is unknown or ignored.
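Putting illustrative numbers on that arithmetic (the values are mine, purely for illustration):

```python
# Hypothetical numbers, purely for illustration
dT = 0.5                       # K: warming due to El Nino
Y = 1.5                        # W/m^2: true (outgoing) feedback response to dT
X = 0.5                        # W/m^2: ENSO-driven cloud forcing opposing Y

true_response = Y / dT         # 3.0 W/m^2/K: what we want to measure
measured_toa = Y - X           # 1.0 W/m^2: what the satellite actually sees
diagnosed = measured_toa / dT  # 2.0 W/m^2/K: biased low by X/dT
```

The diagnosed response is biased low by exactly X/dT, so the larger the unknown forcing X, the larger the underestimate.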

This point seems to be missed amid all the arguments over the definitions of "forcing" and "feedback".  And yet, if this “radiatively forced” portion from clouds is indeed tiny, it won’t make much of a difference.  From what I gather, Dessler 2011 dives deeper into the analysis of what percentage of the variation is radiatively forced.  I’m looking forward to seeing what comes out there, hopefully in another post.

As always, any comments explaining what I have wrong here are appreciated.


  1. […] I’ve already looked at SB11 in a previous post, now I’ll turn to Dessler 2011, also including the critique Dr. Spencer put up on his blog.  […]

    Pingback by Thoughts on Dessler 2011 « Troy's Scratchpad — September 16, 2011 @ 11:27 am

  2. “To make the case that the climate models are off because they assume variations are not radiatively forced (in the 21st century), you would expect the climate model lines to look like the non radiatively forced line in figure 3.”

    My understanding, Troy, is that Roy has actually been saying that forced variations occur in some (all?) of the models, too. An important question, IMHO, is whether the relationships at zero lag give the correct sensitivities of the models. I am pretty sure they do not.

    Comment by timetochooseagain — September 16, 2011 @ 2:44 pm

    • TTCA,

      “My understanding, Troy, is that Roy has actually been saying that forced variations occur in some (all?) of the models, too.”

      From Dessler’s 2011 figure 2, he mentions that “The black lines are from 13 fully coupled pre-industrial control runs”, so I’m assuming by “control runs” he means those without the man-made and volcanic forcings. So it seems to be primarily a matter of how much those volcanic forcings might affect it; not sure if those are included.

      “An important question, IMHO, is do the relationships at zero lag not give the correct sensitivities of models, either? I am pretty sure they do not.”

      Again looking at Dessler’s figure 2, I’m inclined to agree with you. It seems odd that among everything else, this point seems to be lost.

      Comment by troyca — September 16, 2011 @ 3:04 pm

      • A “control run” usually means that external forcings have not been imposed. I wouldn’t say that this means their temperature variations over the short term can’t be the result of radiative fluctuations: of course, those variations would have to arise from chaotic weather variations, not being caused by anything specific. Maybe models don’t have those kinds of fluctuations, maybe they do, I don’t know. It looks to me like they do, though. The timescale of radiative fluctuations in climate models must almost certainly be shorter than several decades, because they don’t have spontaneous climate changes without externally imposed forcing over periods of decades (anymore, anyway; they solved that “problem” with recent generations of models: it was called “drift” and required “flux adjustments” to eliminate).

        Comment by timetochooseagain — September 16, 2011 @ 3:21 pm

  3. TTCA,

    The difference, I think, is that the chaotic weather variations are not radiatively forced from a TOA flux (but rather would be included in Spencer’s “non-radiative” forcing term). If they DO include the radiative forcings from cloud changes, I don’t think Spencer would be criticizing the models on that account.

    Comment by troyca — September 19, 2011 @ 9:39 am

    • The thing is, I’m saying the models might have high-frequency chaotic forcing. I don’t think Roy has said they don’t (maybe that their level of such was not realistic); indeed, in his earlier papers he noted signatures of radiatively forced and non-radiative variations occurring in models. The main criticism, as I understand it, is twofold:

      1. That the models show slopes during the “non-radiatively forced, feedback striations” much lower than appear to occur in nature.

      2. That the timescales for the radiatively forced components that arise chaotically are all arbitrarily short, so that natural trends cannot arise chaotically due to such variations in models, but this might not be the case in nature.

      It appears, to me, that models do include some level of radiatively forced “weather” with short timescales. For instance, here are some examples of the relationships between models’ fluxes and their temperatures:

      Note that the dotted lines represent the true feedbacks for those models.

      Now, in SB10 (in JGR), they found that in their simple model this pattern of striations and loops arises from a combination of forced and unforced variability, but note that the shape of the forcing curve matters to what kind of variations occur in the phase-space plots. They found that a linearly increasing forcing would lead to a sort of inverted square-root shape, not related to the feedback slope, if such a forcing is not removed from the fluxes first. The loop shapes arise from chaotic (or, as they put it, “random”) variations in forcing. That this shape exists in models suggests that, in models, some chaotic cloud or other radiative forcing does occur. The level/timescale of such may not be realistic, however.

      Comment by timetochooseagain — September 19, 2011 @ 12:34 pm
