Troy's Scratchpad

December 21, 2011

Measuring Sensitivity with the GFDL-CM 2.1 Control Run, Part 2

Filed under: Uncategorized — troyca @ 6:43 pm

In part one, I noted a number of potential issues with using the Forster and Gregory (2006) method over a period dominated by ENSO activity to diagnose climate sensitivity. And yet, as shown in that post, using the method on GFDL CM2.1 seemed to yield fairly accurate results, even slightly underestimating climate sensitivity (it appears to do the same with ECHAM-MPI as well). So, if there are indeed issues that can lead to overestimates of the sensitivity (as happens in 12 of the other models), why doesn't the method overestimate sensitivity in these two models?

One obvious answer would be that some of these potential sensitivity-inflating issues are simply not present in the models, such as errors in the independent variable (i.e. temperature measurements) and unknown radiative forcings (the Spencer and Braswell argument). But that still leaves other issues: the timing offset between atmospheric and sea surface temperature changes (so that the measured radiative response is offset with respect to surface temperature changes), along with the large difference between short-term and long-term cloud feedbacks in the models. Both of these ARE present in the GFDL CM2.1 and ECHAM-MPI models.

To investigate, I used the Soden GFDL 2.1 kernels, along with the last 100 years of the GFDL CM2.1 pre-industrial control run from the PCMDI archive, to separate out the radiative responses by climate component (water vapor, surface albedo, and temperature), then ran regressions against combined tos (sea surface temperature) + land tas (2 meter air temperature) for 11-year periods (roughly matching the 2000-2010 CERES data we have) to see what the method would yield for these instantaneous feedbacks. As the actual sensitivity and feedbacks per doubling of CO2 for this model are known, we can compare them to the "instantaneous" feedbacks as diagnosed by the FG06 method. The long-term feedbacks for GFDL CM2.1 are from Soden and Held (2006). The remaining feedbacks come from the median of the estimators from my different regression periods in the control experiment. Units are W/m^2/K.

 

                                   12 mo avg   3 mo avg   1 mo avg   Long Term
Overall                              -1.78      -1.76      -1.58      -1.37
Temperature (Planck + Lapse Rate)    -3.72      -3.75      -3.81      -4.36
Water Vapor                           1.76       1.77       1.82       1.97
Surface Albedo                        0.21       0.21       0.20       0.21
Cloud                                 0.53       0.53       0.59       0.81
Residual                             -0.56      -0.55      -0.38        ?

Please note that this does not imply the GFDL CM2.1 has a negative net climate feedback by the typical definition, since I am including the Planck response in the overall feedback presented. 
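The regression procedure described above can be sketched as follows. This is a minimal illustration on synthetic data, not actual GFDL CM2.1 output: the temperature series, feedback value, and noise levels are all invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (hypothetical values): regress a kernel-derived flux
# anomaly on surface temperature anomaly in non-overlapping 11-yr windows,
# then take the median slope as the diagnosed "instantaneous" feedback.
n_months = 100 * 12            # 100 years of monthly anomalies
true_feedback = 1.8            # W/m^2/K, illustrative "water vapor" value

# Red-ish AR(1) temperature anomaly series
ts = np.empty(n_months)
ts[0] = 0.0
for i in range(1, n_months):
    ts[i] = 0.9 * ts[i - 1] + 0.1 * rng.standard_normal()
flux = true_feedback * ts + 0.2 * rng.standard_normal(n_months)

def window_slopes(x, y, window=132):
    """OLS slope of y on x in each non-overlapping 132-month (11-yr) window."""
    return np.array([np.polyfit(x[s:s + window], y[s:s + window], 1)[0]
                     for s in range(0, len(x) - window + 1, window)])

slopes = window_slopes(ts, flux)
print("median diagnosed feedback: %.2f W/m^2/K" % np.median(slopes))
```

The median across windows is used rather than a single long regression so that any one unusual 11-yr stretch of the control run does not dominate the estimate.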

Anyhow, the temperature, water vapor, surface albedo, and cloud flux contributions determined using this technique seem to explain about 97.5% of the variance in TOA radiative fluxes in the GFDL model over this period of the control run. Unfortunately, the residuals are fairly well correlated with temperature (r^2 = 20%, higher than the cloud and surface albedo portions), so there seems to be a substantial leftover instantaneous response apart from these other feedbacks. The "Residual" row in the table above is merely the difference between the "Overall" diagnosed feedback (from overall flux anomalies) and the sum of the individual feedbacks (from the kernel technique). To ensure that this is not merely a statistical artifact, I also regressed the residual fluxes themselves (after removing the various climate components) against temperature, yielding values near those in the "Residual" row.
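The residual bookkeeping can be checked with a toy example. The feedback values below are invented to resemble the table, not taken from model output; the point is only to show the two ways of computing the residual.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy check: the "Residual" can be computed either as the overall slope
# minus the sum of the component slopes, or by regressing the leftover
# flux directly on temperature.  All numbers here are hypothetical.
n = 1200
ts = 0.2 * rng.standard_normal(n)
components = {                 # hypothetical instantaneous feedbacks, W/m^2/K
    "temperature": -3.75, "water_vapor": 1.77, "albedo": 0.21, "cloud": 0.53,
}
comp_flux = {k: lam * ts + 0.1 * rng.standard_normal(n)
             for k, lam in components.items()}
overall = sum(comp_flux.values()) - 0.55 * ts   # built-in -0.55 residual term

def slope(y, x):
    return np.polyfit(x, y, 1)[0]

residual_a = slope(overall, ts) - sum(slope(f, ts) for f in comp_flux.values())
residual_b = slope(overall - sum(comp_flux.values()), ts)
print(residual_a, residual_b)   # both -0.55; they agree by linearity of OLS
```

Because least-squares slopes are linear in the dependent variable, the two computations agree exactly in this idealized setting; with real kernel output they would differ only through the approximations in the kernel decomposition itself.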

Nonetheless, this may go a bit towards solving a couple of those mysteries. The fairly large underestimate of the temperature response (~0.5 to 0.6 W/m^2/K) is very likely the result of the timing offset / atmospheric temperature lag time previously discussed, and this can have serious consequences for estimated sensitivity (the difference, for example, between a 3 K and a 2.15 K sensitivity). However, we don't see an overestimate of sensitivity because a) there appears to be a unique short-term response going on here, and b) the method underestimates the cloud feedback, which is significantly more positive in the long-term for this model than it is in the short-term. From Dessler (2010), we see that only one other model significantly underestimates the positive cloud feedback in the short-term: ECHAM-MPI. Part (b) leads me to consider that the reason the FG06 method does NOT overestimate the sensitivity in these two particular models is this relationship between short-term and long-term cloud feedbacks. It's worth noting that Dessler (2010) calculated an even smaller short-term cloud feedback from GFDL CM2.1 than I find here. I use a different part of the control run, but otherwise I'm not sure how to explain the difference.

The water vapor calculation is pretty close, although perhaps a bit underestimated using the instantaneous method. The short-term and long-term albedo estimates are about the same, which is surprising since albedo feedbacks are typically considered slower feedbacks that won't fully manifest in the short-term. Finally, one may notice that the GFDL CM2.1 overall feedback of -1.37 W/m^2/K from Soden and Held (2006), if converted to ECS in the typical manner (3.8 W/m^2 / 1.37 W/m^2/K = 2.77 K), does not correspond to the published ECS of that model (3.4 K). You get closer if you use the 4.3 W/m^2 TOA forcing described in Soden and Held (2006) for a doubling of CO2 instead of 3.8 W/m^2, but it still does not seem to explain how CM2.1 can have a significantly larger radiative response to surface temperature changes while also having a higher sensitivity to a CO2 doubling than CM2.0. Unless the estimated CO2 forcings are that much different? This is why I have left a "?" in the Residual row for the long-term column.
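Spelling out the feedback-to-ECS arithmetic in that paragraph (ECS = F_2xCO2 / |lambda|, with the forcing values as quoted):

```python
# Converting the Soden and Held (2006) net feedback to an equilibrium
# climate sensitivity, for the two candidate 2xCO2 forcing values.
lambda_net = 1.37                 # W/m^2/K, magnitude of net response
print(3.8 / lambda_net)           # ~2.77 K, vs the published ECS of 3.4 K
print(4.3 / lambda_net)           # ~3.14 K with the Soden and Held forcing
```

Even the larger forcing leaves a gap to the published 3.4 K, which is the unexplained discrepancy noted above.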

Regardless, I will be investigating this method in some other control runs, along with more periods in the GFDL control run. From these results alone, however, my tentative conclusion would be that using the radiative fluxes over a period similar to 2000-2010 to measure climate sensitivity, even in the absence of errors in the regressors and of noise due to unknown radiative forcings, would lead to:

1) Likely underestimates of the temperature response (due to the timing offset)

2) Inaccuracy in the cloud feedback, although the direction is unknown (the models are split on this, at least according to Dessler 2010).

3) Some "residual" response component, whose magnitude and sign are unknown with respect to the different models

Of course, testing the method on more models may change things.  That’s a lot left to do.

Code and data for this post available here.

December 12, 2011

Foster and Rahmstorf 2011 lends support to…Spencer and Braswell?

Filed under: Uncategorized — troyca @ 6:24 pm

A new paper is out by Foster and Rahmstorf (2011), and while I may later do a more in-depth analysis, I want to point out a rather interesting implication of this paper, if indeed one were to take it at face value: it supports Spencer and Braswell (2008, 2010, and, to some degree, 2011). Allow me to explain. (Note: to avoid confusion, Grant Foster, a.k.a. Tamino, is a co-author of the new paper, while Piers Forster co-authored the papers I reference below that attempt to measure sensitivity from radiation fluxes.)

As you may recall, I performed some sensitivity tests related to the multiple regression a while back. Looking that post over again, there are a few errors on my part (I believe I used actual surface T for the S-B/Planck response), but there are a few interesting tidbits: 1) leaving out the adjustments for TSI/solar affects the conclusions, and 2) the estimated solar response is around 4 times greater than the volcanic response.

Let's take a closer look at #2, which may have changed a bit from the post to the paper. From figure #3 of the FR11 paper, we see the coefficient for TSI at around 0.1 C for the land data, which, after adjusting for planetary albedo and the shadow area / surface area ratio, results in around a 0.57 C/(W/m^2) instantaneous surface temperature response to the actual solar forcing. Note that in Tamino's original post, he had estimated about 0.39 C/(W/m^2) for solar, but that was when the solar influence range was only 0.08 C rather than the 0.12 C mentioned in the new paper.
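The albedo and geometry adjustment works out roughly as follows. I assume a planetary albedo of 0.3 here; the exact value used may differ slightly, so treat this as a back-of-envelope reconstruction of the 0.57 figure.

```python
# A 1 W/m^2 change in TSI is spread over 4x the shadow area (sphere vs
# disc) and reduced by planetary albedo before it counts as a forcing.
coef_tsi = 0.1                        # C per W/m^2 of TSI (FR11 fig. 3, land)
albedo = 0.3                          # assumed planetary albedo
forcing_per_tsi = (1 - albedo) / 4.0  # ~0.175 W/m^2 forcing per W/m^2 TSI
response = coef_tsi / forcing_per_tsi
print("%.2f C/(W/m^2)" % response)    # ~0.57
```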

[Aside]

For Aerosol Optical Depth (volcanic), the coefficient is around 2 C/tau. If we look up the approximate forcing, we see that it is around -25 W/m^2/tau. Such an estimate would yield an instantaneous sensitivity of around 0.08 C/(W/m^2), which would put it at around 1/7 the efficacy of a solar forcing, both in W/m^2. Certainly, there are reasons to believe that the instantaneous surface temperature response to the larger forcings may be damped (thank you SteveF) by ocean heat uptake, but it seems that a factor of 7 (or even 4) remains far too big a discrepancy to be considered a reasonable physical result. Furthermore, the longer-term response may be expected to manifest itself over the course of, say, 8-12 years, but the FR11 paper ignores anything beyond the instantaneous response.
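The volcanic side of the same arithmetic, with the approximate values quoted above:

```python
coef_aod = 2.0        # C per unit tau, FR11 AOD coefficient
forcing_aod = 25.0    # |W/m^2| per unit tau, approximate volcanic forcing
resp_aod = coef_aod / forcing_aod       # ~0.08 C/(W/m^2)
resp_solar = 0.57                       # C/(W/m^2), solar value from above
print(resp_aod, resp_solar / resp_aod)  # ~0.08, ratio ~7
```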

[End Aside]

Anyhow, according to FR11, the time between the solar forcing anomaly to the surface temperature response is estimated to be around 1 month. Remember that for later.

Relation to Spencer and Braswell

For more on the background of attempting to measure climate sensitivity and where the Spencer and Braswell arguments fit in, please see this page. But as a quick summary, I'll note that in Forster and Gregory (2006), the authors comment (my insert in bold):

The X terms [radiative noise or unknown radiative forcings] are likely to contaminate the result for short datasets, but provided the X terms are uncorrelated to Ts, the regression should give the correct value for Y, if the dataset is long enough.

Spencer and Braswell argue that unknown radiative forcing (fluctuations in cloud cover, which we know to exist AT LEAST on short timescales, per Dessler (2010)) would necessarily influence Ts and hence lead to an underestimate of the radiative response. The counter-argument has been two-fold:

First, the decorrelation time of this radiative noise is claimed to be shorter than the surface temperature response time. From Murphy et al. (2009), we read:

If temperature variations are changing outgoing radiation then temperature should be the independent variable whereas if radiation variations are affecting temperature then temperature should be the dependent variable. Although both are true to some extent, they can be partially separated by time response: outgoing radiation changes are mostly immediate whereas surface temperatures lag radiative forcing. Autocorrelation analyses of global temperatures suggest that the surface ocean portion of the Earth’s climate response has a time constant of about 8–12 years [Scafetta, 2008; Schwartz, 2008].

I believe this response misses the mark, as you might very well expect significant surface temperature responses to forcings on much shorter time-scales, even if the full forcing response is not realized for several more years. A better argument might be that the decorrelation time of this noise used by SB is too long, and that for cloud fluctuations it is actually on the scale of 2-3 months, whereas the temperature response comes (for example) about 5 months later. However, the Foster and Rahmstorf (2011) paper implies a lag in temperature response of only around 1 month for these smaller fluctuations. Even with only intraseasonal fluctuations (such as the Madden-Julian oscillation) in cloud cover, this would suggest a strong correlation between these unknown radiative fluctuations (X) and T_surface!

The second major argument against the Spencer and Braswell result, as advanced by Murphy and Forster (2010) and Dessler (2011), is that the effective heat capacity of the ocean on these timescales is too high for the unknown radiative forcing to have any significant effect on surface temperatures. They attribute almost all of the surface temperature fluctuations during this recent decade to internal, non-radiative forcings (e.g., heat exchange between the ocean layers). I have explained before why their estimates of heat capacity are inappropriate for these monthly timescales. Nonetheless, using a similar method to Dessler (2011), I'll point out that the standard deviation in surface temperature anomaly from 2000-2010 is around 0.1 C. Dessler (2011) calculates the standard deviation of the cloud forcing/noise to be around 0.5 W/m^2. So, can this 0.5 W/m^2 cloud fluctuation force any significant amount of the 0.1 C surface temperature changes? According to Dessler (2011), the answer is a strong NO (~5%). But according to Foster and Rahmstorf (2011), with its 0.57 C/(W/m^2) instantaneous response to solar forcing, such cloud forcing fluctuations (if the response scales) could result in 0.28 C changes! Now one may argue that the responses to slower solar cycles don't experience the same damping, but even if the cloud forcing efficacy is only 1/5 that amount, this would imply that measurements of climate sensitivity from radiative fluxes have been greatly overestimated.
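The scaling in that paragraph, using the values as quoted (the "if the response scales" caveat is the key assumption):

```python
sigma_cloud = 0.5      # W/m^2, Dessler (2011) cloud forcing std dev
resp_solar = 0.57      # C/(W/m^2), FR11-implied instantaneous response
sigma_t = 0.1          # C, observed 2000-2010 surface temperature std dev
implied = sigma_cloud * resp_solar
print(implied)                   # ~0.28 C, vs 0.1 C observed variability
print(implied / 5.0)             # ~0.057 C even at 1/5 the efficacy
```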

Overall, I don't think a proper analysis will support either the high Dessler (2011) heat capacity over these short periods, or the high instantaneous effect of changes in TSI from FR11 that contradicts it. Indeed, I suspect the latter is likely an artifact of fitting to an underlying linear trend, with the effect of the solar minimum overestimated in order to counter the flattening in the early 21st century. I think this point highlights the larger problems with such a methodology. Nonetheless, if one were to take the FR11 results at face value, Spencer and Braswell could very well point out that this peer-reviewed paper suggests a short lag time (1 month) for a large surface temperature response to a small forcing, lending credence to their argument that unknown radiative forcing "noise" will correlate with surface temperature. Heck, even using a T_s response midway between the FR11 values for TSI and AOD per W/m^2 would strongly support the SB case.

December 1, 2011

Katsman and van Oldenborgh 2011 Update

Filed under: Uncategorized — troyca @ 8:32 am

Two months ago I posted on a couple of issues I had with Katsman and van Oldenborgh 2011: 1) they assumed overlapping 8-year trends were independent when calculating the likelihood of a single 8-yr negative or zero trend occurring in the upper ocean heat content over 30 years, and 2) the ENSO observations did not support the theory, developed from the model, that El Nino was the cause of the extra radiation escape.
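Issue (1) is easy to demonstrate with a toy Monte Carlo: overlapping 8-yr trends computed from the same 30-yr series share most of their data, so they are far from independent draws. All parameters below are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def sim_series(n_years=30, phi=0.6, sigma=1.0):
    """AR(1) annual anomalies (a stand-in for detrended OHC)."""
    y = np.empty(n_years)
    y[0] = sigma * rng.standard_normal()
    for i in range(1, n_years):
        y[i] = phi * y[i - 1] + sigma * rng.standard_normal()
    return y

def trends_8yr(y):
    """All 23 overlapping 8-yr OLS trends in a 30-yr series."""
    t = np.arange(8)
    return np.array([np.polyfit(t, y[s:s + 8], 1)[0]
                     for s in range(len(y) - 7)])

# Pool adjacent-trend pairs over many simulated series and correlate them;
# adjacent overlapping windows share 7 of their 8 data points.
a, b = [], []
for _ in range(500):
    tr = trends_8yr(sim_series())
    a.extend(tr[:-1])
    b.extend(tr[1:])
print("corr of adjacent 8-yr trends: %.2f" % np.corrcoef(a, b)[0, 1])
```

The strong correlation means that treating the 23 overlapping trends as independent draws inflates the estimated probability of seeing at least one negative trend, which is exactly what the published correction addresses.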

Thanks to commenter Howard on that thread, who pointed out that a correction was published soon after (readable draft version), I am pleased to see that issue #1 has been addressed. The probability of an 8-yr negative trend (according to the ECHAM-MPI model) during the 30-year period starting in 1990 has been appropriately reduced from 57% to 25-30%, and the probability of a 9-yr negative trend has been reduced from 48% to 5-15%. However, it appears the 9-year trend will now likely be positive, given the recent uptick in UOHC. I would love to flatter myself and think I had something to do with the correction, EXCEPT that the draft was received by GRL on September 29th, a couple of weeks before my post.

Finally, KO (2011) mention the following:

The computational error has no impact at all on the analysis in the remainder of the paper, from which we concluded that such a period without upper ocean warming is explained by increased radiation to space, largely as a result of El Nino variability on decadal timescales, and by increased ocean warming at larger depths, partly due to a decrease in the strength of the Atlantic meridional overturning circulation.

Bold mine. As I pointed out in that previous post and in the 2nd issue above, their model suggests that 8-yr average El Nino conditions with a four-year lead would explain part of the negative trend, but the ENSO index was actually negative in the four years leading up to the current flattening. It may be that El Nino variability is the cause, but if that's the case then the ECHAM-MPI model would seem to have the radiative response to El Nino wrong (at least with respect to the time lag). This, I think, would affect the conclusions.
