A detailed analysis is presented in order to determine the sensitivity of the estimated short-term cloud feedback to choices of temperature datasets, sources of top-of-atmosphere (TOA) clear-sky radiative flux data, and temporal averaging. It is shown that the results of a previous analysis, which suggested a likely positive value for the short-term cloud feedback, depended upon combining all-sky radiative fluxes from NASA’s Clouds and Earth’s Radiant Energy System (CERES) with reanalysis clear-sky forecast fluxes when determining the cloud radiative forcing (CRF). These results are contradicted when ΔCRF is derived using both all-sky and clear-sky measurements from CERES over the same period. The differences between the radiative flux data sources are thus explored, along with the potential problems in each. The largest discrepancy is found when including the first two years (2000–2002), and the diagnosed cloud feedback from each method is sensitive to the time period over which the regressions are run. Overall, there is little correlation between the changes in the ΔCRF and surface temperatures on these timescales, suggesting that the net effect of clouds varies during this time period quite apart from global temperature changes. Given the large uncertainties generated from this method, the limited data over this period are insufficient to rule out either the positive feedback present in most climate models or a strong negative cloud feedback.
As you may recall, this primarily grew out of my frustration with the Dessler (2010) Science paper, which concluded that the shortwave, longwave, and total components of the short-term cloud feedback were all likely positive. I found that if you didn't substitute in the reanalysis values for clear-sky (and instead used the same CERES flux source for both all-sky and clear-sky), the result was an estimated negative cloud feedback. I did a guest post on this at Lucia's over a year ago, and Steve McIntyre posted on this as well. Moreover, the reference given in Dessler (2010) as the reason for avoiding CERES clear-sky discusses the absolute OLR bias, which would not directly affect the SW component in any significant way, and other results suggested this would not affect interannual anomalies either (Allan et al., 2003). My paper is basically a sensitivity test, highlighting the sensitivities to the clear-sky flux source, the time period used, as well as the temperature dataset chosen.
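To make the mechanics concrete, here is a minimal sketch of the regression at the heart of this kind of analysis: ΔCRF is formed as the clear-sky minus all-sky TOA flux anomaly, and its regression slope against global temperature anomalies is read as the short-term cloud feedback. The data below are purely synthetic and illustrative (the function name and the planted slopes are my own invention, not values from the paper or from CERES); the point is only that swapping the clear-sky source changes the diagnosed slope.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120  # months of synthetic anomalies

# Synthetic monthly anomalies -- illustrative only, NOT real CERES/reanalysis data
t_anom = rng.normal(0.0, 0.25, n)                 # global temperature anomaly (K)
f_all = -1.0 * t_anom + rng.normal(0.0, 0.2, n)   # all-sky TOA outgoing flux anomaly (W m^-2)
# Two hypothetical clear-sky sources that differ slightly in their temperature dependence:
f_clear_a = -1.5 * t_anom + rng.normal(0.0, 0.2, n)   # e.g., measured clear-sky
f_clear_b = -1.2 * t_anom + rng.normal(0.0, 0.2, n)   # e.g., reanalysis clear-sky

def feedback_slope(f_clear, f_all, t):
    """Regress dCRF = f_clear - f_all on temperature; the slope is the
    diagnosed short-term cloud feedback (W m^-2 K^-1)."""
    dcrf = f_clear - f_all
    slope, _intercept = np.polyfit(t, dcrf, 1)
    return slope

print("clear-sky source A:", feedback_slope(f_clear_a, f_all, t_anom))
print("clear-sky source B:", feedback_slope(f_clear_b, f_all, t_anom))
```

With noiseless inputs the slope recovers the planted flux sensitivity difference exactly; with realistic noise and short records, the two clear-sky choices yield visibly different feedback estimates, which is the sensitivity the paper explores.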
Ultimately, I think subsequent research into the topic has revealed a variety of issues in all the data sets (as reflected in the paper), and the introduction of the global EBAF dataset in the interim has made things more interesting. I believe the paper has improved through the open review process from the referee comments, including those from Dr. Dessler, who served as a non-anonymous reviewer, even though I think he ultimately disagrees with some of my conclusions.
So, what does the paper show? Basically, as you can see from the abstract, I don't believe that regressing global temperature anomalies induced by ENSO variations against the cloud forcing in this manner tells us much of interest with respect to the "cloud feedback" one may expect in relation to a doubling of CO2. It certainly doesn't in models. This is the reason for such sensitivity to various methodological choices. With ENSO, the warming progresses across different regions at different times, so the estimate of the cloud feedback varies wildly at different leads/lags or when using ocean vs. SAT vs. lower-tropospheric temperatures. (Some related discussion of modeled cloud biases during ENSO vs. the long term can be found in Sun et al., 2009.)
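The lead/lag sensitivity can be illustrated with a small sketch: regress a synthetic ΔCRF series against a temperature series shifted by various lags, and watch the diagnosed slope change with the choice of lag. Everything here is synthetic and hypothetical (the 4-month lag and the 0.6 W m⁻² K⁻¹ planted slope are arbitrary choices of mine, not results from the paper); it only demonstrates why the regression outcome depends on the lead/lag convention.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120  # months

# Synthetic, illustrative series: an ENSO-like oscillation in temperature,
# with a CRF response that lags temperature by 4 months
t_anom = np.sin(np.arange(n) * 2 * np.pi / 40) + 0.1 * rng.normal(size=n)
dcrf = 0.6 * np.roll(t_anom, 4) + 0.1 * rng.normal(size=n)

def slope_at_lag(dcrf, t, lag):
    """Regression slope of dCRF on temperature shifted by `lag` months
    (positive lag = temperature leads the CRF response)."""
    if lag > 0:
        x, y = t[:-lag], dcrf[lag:]
    elif lag < 0:
        x, y = t[-lag:], dcrf[:lag]
    else:
        x, y = t, dcrf
    return np.polyfit(x, y, 1)[0]

for lag in (-4, 0, 4):
    print(f"lag {lag:+d} months: slope = {slope_at_lag(dcrf, t_anom, lag):.2f}")
```

Only at the "correct" 4-month lag does the regression recover the planted slope; other lag choices dilute it, which is a toy version of why the diagnosed feedback shifts with the lead/lag and temperature dataset chosen.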
The paper has already gotten some publicity from James Annan, although I haven’t really done any other promoting given that I don’t think the results are particularly groundbreaking, despite it being “controversial” for criticizing an earlier paper. One thing I learned – I don’t have much desire to be lead or sole author on another “controversial” paper anytime soon…it seems quite exhausting for a hobby!