Troy's Scratchpad

September 21, 2011

CERES and ERA-Interim fluxes in Dessler 2010

Filed under: Uncategorized — troyca @ 7:07 am

As I noted in a previous blog post, and as mentioned at Climate Audit, the positive cloud feedback result (and the exclusion of large negative feedbacks) in Dessler 2010 (not to be confused with Dessler 2011, which was the topic of my latest post) largely depends on using the ECMWF ERA-Interim reanalysis for the clear-sky fluxes rather than CERES.  The Cloud Radiative Forcing is determined by the difference between all-sky and clear-sky fluxes, and rather than using the CERES data for both, Dessler10 combines CERES all-sky with ERA clear-sky.  The reason given for doing this is a suggested bias in the CERES clear-sky fluxes, with a reference to the Sohn and Bennartz 2008 paper.
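Since the whole argument turns on how CRF is computed, here is a minimal sketch of the arithmetic (the flux values are purely illustrative, not actual CERES/ERA numbers):

```python
# Cloud radiative forcing as clear-sky minus all-sky TOA flux: positive
# values mean clouds reduce the outgoing flux relative to clear skies.
# All numbers below are illustrative, not actual CERES/ERA values.
def crf(all_sky_flux, clear_sky_flux):
    return clear_sky_flux - all_sky_flux

ceres_all_sky = 240.0  # hypothetical all-sky TOA flux, W/m^2
era_clear_sky = 262.0  # hypothetical clear-sky TOA flux, W/m^2
print(crf(ceres_all_sky, era_clear_sky))  # 22.0
```

Mixing datasets, as Dessler10 does, means the two arguments to this difference come from different flux calculations.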

In that first post, and in my guest post at Lucia’s, I pointed out two things:

1) Most of this difference comes from the shortwave fluxes, which are unaffected by the longwave water vapor bias, and

2) Even if the OLR dry-sky bias were present, it would not account for the differences we see here:

But I’m doubtful that the LW result should be discounted based on measurement bias anyhow. For one, the SB08 paper refers to bias in the absolute calculation of CRF, not necessarily to the change in CRF, and the effect is minimal there (around 10% of only the OLR). Second, the bias should affect it in the opposite direction – it would make the cloud feedback appear more positive, not negative. From the SB08 paper, they mention: “As expected, OLR fluxes determined from clear-sky WVP are always higher than those from the OLR with all-sky OLR (except for the cold oceanic regions) because of drier conditions over most of the analysis domain.” Obviously, clear-sky days don’t prevent as much OLR from leaving the planet as cloudy days, and SB08 estimates that about 10% of this effect is from water vapor instead of all of it being from clouds. So, warmer temperatures should increase water vapor, which will be more prevalent on the cloudy days vs. the clear sky days, which in turn will make it appear that clouds are responsible for trapping more OLR than they actually do. In other words, the bias includes some of the positive feedback due to water vapor – which is already counted elsewhere – in the estimation of cloud feedback.

Now, Nick raised a point over at CA: perhaps all we’re getting with the ERA-Interim clear-sky fluxes is the CERES fluxes, but with this dry-sky bias removed:

What the reanalysis can then do is make the moisture correction so the humidity is representative of the whole atmosphere, not just the clear bits. I don’t know for sure that they do this, but I would expect so, since Dessler says that are using water vapor distributions. Then the reanalysis has a great advantage.

While I don’t believe this could be reconciled with either of the two points above (there is no dry-sky bias to "correct" in the shortwave fluxes, and the correction should be in the opposite direction), I want to give this a fuller treatment that should prove that the positive cloud-feedback result in Dessler10 is largely an artifact of combining the two different datasets.

If our hypothesis is that ERA_CRF is simply CERES_CRF corrected for dry-sky bias, we should be able to detect it rather easily by checking the relationship between the measured water vapor (total precipitable water from CERES) and the CERES_CRF-ERA_CRF differences.  The following scatter plot shows that relationship:

[Figure: scatter plot of CERES total precipitable water vs. the CERES_CRF - ERA_CRF differences]

As you can see, the "dry-sky bias correction", if it exists in ERA, accounts for very little of the difference we see between ERA_CRF and CERES_CRF.  Please note the following point I made on that CA thread: 

Since we’re calculating CRF as the difference between clear-sky and all-sky fluxes, ANY difference in those two datasets is going to show up in the estimated cloud forcing, including their different estimates of solar insolation (which has nothing to do with clouds).  The magnitude of the changes in flux is far smaller than the magnitude of the total flux, so you would expect using two different datasets to have a lot more noise unrelated to the CRF. Note that if there is ANY flux calculation bias in either of the two datasets unrelated to clear-sky vs. all-sky, it WILL show up in the CRF, whereas if you use the same dataset, even if a flux calculation bias is present, it will NOT show up in CRF unless it is related to clear-sky vs. all-sky.
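The point in the quoted paragraph can be made concrete with a toy numerical example (all fluxes hypothetical): a calibration offset shared by both sky conditions cancels within one dataset, but shows up in full when the datasets are mixed.

```python
# Hypothetical numbers showing how a dataset-level flux bias cancels
# within one dataset but contaminates a mixed-dataset CRF.
bias = 3.0                      # hypothetical calibration offset in dataset B, W/m^2
a_all, a_clear = 240.0, 262.0   # dataset A fluxes, W/m^2
b_all, b_clear = a_all + bias, a_clear + bias  # dataset B: same skies, offset fluxes

crf_a = a_clear - a_all         # both terms from A: offset-free
crf_b = b_clear - b_all         # both terms from B: offset cancels
crf_mixed = b_clear - a_all     # mixed: the full offset appears as "cloud forcing"
print(crf_a, crf_b, crf_mixed)  # 22.0 22.0 25.0
```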

I’d like to pause a moment and note that the Dessler10 references to the ability of models to calculate clear-sky flux refer only to the longwave flux.  This is because the shortwave flux is rather trivial, assuming you know the incident solar radiation and albedo:

outgoing_sw_clr = incoming_sw * albedo_clr

Of course, there is no definitive monthly value for albedo and incoming solar radiation, as evidenced by the fact that CERES and ERA have different values for these.  Albedo_clr is primarily just surface_albedo, although there is perhaps some aerosol mixed in.  This next chart shows the effective clear-sky albedo, which can be calculated from outgoing_sw_clr/incoming_sw, but it could just as easily be put in terms of the differences in clear-sky short-wave fluxes:

[Figure: effective clear-sky albedo (outgoing_sw_clr / incoming_sw) for CERES and ERA]
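The effective-albedo calculation behind the chart can be sketched as follows (the global-mean values are hypothetical, chosen only to show the shape of the computation):

```python
# Backing out the effective clear-sky albedo from the shortwave fluxes,
# as described above. Values are hypothetical global means in W/m^2.
def effective_clr_albedo(outgoing_sw_clr, incoming_sw):
    return outgoing_sw_clr / incoming_sw

ceres_albedo = effective_clr_albedo(52.0, 340.0)  # ~0.153
era_albedo = effective_clr_albedo(54.0, 341.0)    # ~0.158
print(ceres_albedo, era_albedo)
```

Two datasets with slightly different clear-sky SW fluxes (or insolation) necessarily imply slightly different effective albedos.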

Well, there you have it.  The bulk of these CERES_CRF vs. ERA_CRF differences come from this different value for the effective surface albedo.  Note that this has nothing to do with a "dry-sky" longwave water vapor bias.  The value of 275?  That is approximately incoming_sw * (1 – cloud_albedo), as we’d expect when calculating the flux difference between net and clear-sky with no cloud albedo change.  I repeat: when combining the CERES all-sky flux with the ERA clear-sky flux, your “CRF” change will be about equal to 275 * (CERES_surface_albedo-ERA_surface_albedo), even when no cloud properties have changed.  This is what I mean by slight differences from either set showing up as a bias in the CRF estimate when you combine them. 

Does it matter whether CERES surface albedo or ERA surface albedo is more correct?  Not at all, as long as they’re consistent.  You can make the corrections to the CERES short-wave all-sky flux to use the ERA surface albedo, or you can make the corrections to the ERA short-wave flux to use the CERES surface albedo, or you can just use CERES fluxes for both all-sky and clear-sky.  But the important thing to note here is that by combining the CERES all-sky with ERA clear-sky, you get a difference in what is effectively surface albedo bundled in with the CRF term, despite having nothing to do with clouds. 
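That bookkeeping can be sketched directly, using the ~275 W/m^2 factor from above (the albedo values below are hypothetical):

```python
# Sketch of the spurious shortwave "CRF" term described above: mixing
# CERES all-sky with ERA clear-sky leaves a residual proportional to
# the surface-albedo disagreement, even with no change in clouds.
FACTOR = 275.0  # ~ incoming_sw * (1 - cloud_albedo), from the post

def spurious_sw_crf(ceres_surface_albedo, era_surface_albedo):
    return FACTOR * (ceres_surface_albedo - era_surface_albedo)

# A 0.004 albedo disagreement (hypothetical) already yields ~1.1 W/m^2
# of apparent "cloud forcing", comparable to the anomalies of interest.
print(spurious_sw_crf(0.154, 0.150))
```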

Using HadCRUT with the CERES_CRF, I get a value of -0.50 W/m^2/K for the cloud feedback.  Using HadCRUT with the ERA_CRF included in Steve’s file from Dessler, I get a value of 0.26 W/m^2/K (slightly lower than in the Dessler10 paper, because additional adjustments are made to go from CRF to R_cloud using radiative kernels, which I’m not disputing here).  Correcting for the different effective surface albedos in the ERA_CRF, I once again get a negative value, -0.34 W/m^2/K.  None of these regressions have much in the way of correlations.
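The feedback values above come from regressing CRF anomalies on surface-temperature anomalies. A minimal sketch of that regression on synthetic data (the feedback, noise level, and series length here are made up; the real numbers come from the actual HadCRUT and CERES/ERA series, not this toy):

```python
# Toy version of the feedback regression: generate monthly temperature
# anomalies, build CRF anomalies from a hypothetical feedback plus
# noise, and recover the slope by ordinary least squares.
import random

random.seed(0)
true_feedback = -0.5                                        # W/m^2/K (hypothetical)
temp_anom = [random.gauss(0.0, 0.15) for _ in range(120)]   # monthly T anomalies, K
crf_anom = [true_feedback * t + random.gauss(0.0, 0.3)      # CRF anomalies, W/m^2
            for t in temp_anom]

def ols_slope(x, y):
    """Ordinary least-squares slope of y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

print(ols_slope(temp_anom, crf_anom))  # a noisy estimate of the true feedback
```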

I have previously suggested why we likely see lower correlations here, and how to correct it, and so I consider -0.34 W/m^2/K to be an underestimate of the short-term cloud feedback.  Nonetheless, to me there seems to be little ambiguity that the magnitude of the positive feedback in Dessler10 is more of an artifact of combining two flux calculations that aren’t on the same page, rather than some bias correction in ERA-interim.  I would welcome any comments to the contrary. 

The script I used for this post, along with some intermediate ERA data and the original CERES data, is available here.  To get the ERA-Interim raw data, you’ll need to use the synoptic means, grabbing the data at time steps of 12 hours, with times of 0:00 and 12:00, for the months 2000 through 2010, and selecting at least the incident solar radiation and the top net solar radiation for clear-sky as variables.

19 Comments »

  1. Outstanding post Troy. Nick never did deal with your point explaining that the wv bias suggested by SB08 is the opposite of what would be needed to change D10’s scatterplot to a negative slope. Not only have you confirmed that with your analysis here, you have shown that D10 created, rather than avoided, a bias (and fundamental flaw) with his choice of ERA clear sky.

    It is curious how D10 deals with this matter. D10 dismisses the use of the CERES measured data with a vague reference to the “bias” discussed in SB08. Dessler fails to report this conclusion-changing sensitivity, which he is obliged to do. As a result, IMO, reviewers fail to follow up on this rather obvious red flag.

    I hope you are considering submitting a comment to Science.

    Comment by Layman Lurker — September 21, 2011 @ 9:29 am

    • I was curious about submitting a comment to Science, but it appears they only publish comments that are submitted within 6 months of when the original paper is published…so that time has already run out on Dessler 2010.

      Comment by troyca — September 22, 2011 @ 6:50 am

  2. Great Troy,

    When I finish the two projects I’m working on, I have an idea for another that could use your help with the cloud data.
    write me @ moshersteve.. at g mail

    Best

    Comment by Steven Mosher — September 21, 2011 @ 9:50 am

  3. […] comment, visit Troy’s blog Written by lucia. Previous Post: […]

    Pingback by The Blackboard » Troy: Comment on Dessler10 — September 21, 2011 @ 10:19 am

  4. […] has another excellent contribution to the continuing analysis of Dessler 2010 and Dessler 2011 (h/t Mosher for alerting […]

    Pingback by Troy: Dessler(2010) “artifact of combining two flux calculations” « Climate Audit — September 21, 2011 @ 11:17 am

  5. […] CERES and ERA-Interim fluxes in Dessler 2010 Filed under: Uncategorized — troyca @ 7:07 am […]

    Pingback by CERES and ERA-Interim fluxes in Dessler 2010 « the Air Vent — September 21, 2011 @ 12:24 pm

  6. Great post Troy.

    Pull it all together and draft a paper. Relevant, timely, and clear.

    Comment by steve fitzpatrick — September 21, 2011 @ 5:17 pm

  7. Those two points on the top right of your plot have a disproportionate effect on the correlation coefficient. It’s always useful to rank, then plot a number series against rank; if it’s not sigmoid, it’s not Gaussian. A top-and-tail, removing the top and bottom 2-5 points and looking at your fit, lets you get an idea of whether an outlier is noise or not.

    Comment by DocMartyn — September 21, 2011 @ 6:01 pm

    • Here is a graph that shows the results with the top two points removed. As you suggest, it has a slight effect on the r^2 value, dropping it from 0.64 to 0.58.

      Comment by troyca — September 22, 2011 @ 7:47 am

  8. Excellent analysis, Troy.

    The one thing I’d criticise is the “slope=-0.19” on your first graph.

    An R^2 of 0.03 does NOT correspond to a low slope. It corresponds to NO SLOPE, i.e. there is NO linear relationship between the two variables. In such a case you do not even calculate the “slope” or fit a line. There is NOTHING to fit to.

    I don’t think you are suggesting otherwise but this idea of presenting a value of a “slope” needs to be blown out once and for all.

    This is precisely what Dessler and others have been pulling off for years in attempts to show the positive feedback on high climate sensitivity.

    This post would be a good place to clearly make that point , rather than falling into the same error.

    Comment by P.Solar — September 22, 2011 @ 5:36 am

    • Exactly. Please handle this comment about the first graph, Troy.

      In addition, is it possible to have a simple letter published in Science making the point that the true correlation is simply between the two albedo calculations you demonstrate with your second graph, as opposed to a full paper in rebuttal?

      AFPhys – via Lucia’s

      Comment by AFPhys — September 22, 2011 @ 9:08 am

    • In this case, I agree that there is likely no (or only a tiny) relationship between the two variables. That was the point I was trying to make in posting the chart. I’m not sure I have a problem with Dessler and others presenting a slope for low correlations, provided they include the large uncertainties in the regressions from such correlations. After all, if there are several other factors influencing the results, the correlations are likely to be low even if there is a slight relationship. I think where Dessler erred here was in not considering the uncertainty involved in combining the two different fluxes, which would have blown up the confidence interval to include the large negative feedback it purported to rebut, and would have rendered the paper rather meaningless when it came to determining the cloud feedback.
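      To make the uncertainty point concrete, here is the textbook standard error of an OLS slope (a sketch, not the actual Dessler10 calculation): with a low correlation, the +/- 2*se interval is wide and can easily span both signs of a feedback.

```python
# Slope and its standard error for a simple OLS fit of y against x
# (textbook formulas). Low correlation -> large residuals -> large se.
import math

def slope_and_stderr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    resid = [(yi - my) - slope * (xi - mx) for xi, yi in zip(x, y)]
    se = math.sqrt(sum(r * r for r in resid) / ((n - 2) * sxx))
    return slope, se
```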

      Comment by troyca — September 22, 2011 @ 9:47 am

      • Thanks for the reply, Troy. However, it appears you may be mistaken, as are many others, about linear regression in cases like this.

        You can have a small slope with a good correlation but a poor correlation means that there is no discernible relationship to fit to. (At least not a linear one).

        What is worse is that if there is some small linear relation behind the noise, you cannot use OLS to find it, because it will be (possibly drastically) reduced by regression attenuation. It can be shown mathematically that the OLS slope will always be less than the true slope if there is significant noise in y. Add to that further reduction from the fact that you have significant errors in y AND x, and the result is meaningless, in a very real sense.

        Doing OLS on scatter plots is mathematically nonsense and fundamentally wrong, yet seems to be a key method in climate science.

        As a test, invert the axes and see what you get!

        My guess would be a slope of about -8.0 on the same plot. Please post back with what you get.

        Dessler’s earlier paper, dissing Lindzen and Choi, similarly made heavy use of inappropriate OLS fits to contradict their careful work.

        I think this is the best, clearest and most irrefutable way to kill off the positive feedback game.

        The rest of your analysis is very enlightening, and I congratulate you on your insightful demonstration of how misleading this paper was.

        Comment by P.Solar — September 22, 2011 @ 11:08 am

  9. The uncertainty is not as simple as you may think either. It’s not just a case of putting a grey bit equally on each side. As I indicated above, the true value is almost certainly above the OLS result. How much depends on the relative magnitude of the noise, and on it being decorrelated with the signal and random in nature.

    If we have a good idea of its magnitude, steps can be made to correct the OLS slope. Making that estimate is where it all gets tricky.

    Take a look at this plot:
    http://tinypic.com/r/2s82omq/7

    It’s the kind of lag regression Spencer has been looking at and was made by extending the R script you posted at CA.

    Dessler, looking at this, would see instant confirmation of a strong positive climate feedback. The trouble is that the linear spike is confounded by the largely out-of-phase dT/dt response causing the lagged hump.

    Since it seems that the linear spike is quite narrow, it will be almost totally decorrelated by the time the other peak hits its max. That gives some hope of treating it as decorrelated noise. We can see the relative magnitudes are comparable.

    That would put the true slope in the range 5 to 6.

    Deriving that rigorously is the challenge.

    Comment by P.Solar — September 22, 2011 @ 11:47 am

    • P. Solar –

      When I switch the axes (your 11:08 comment), I get a slope of -0.1551. I agree that in this case the slope in that first chart is largely meaningless. However, my reasoning for not dismissing a linear relationship between two variables simply because of a low correlation is because I can create a bunch of synthetic data series with a true “slope” of one, add in some normally distributed white noise to the y component, and then find that OLS regression does a pretty decent job of determining the underlying slope most of the time, even despite the low correlations. I’m aware that this is a rather specific case, but in this situation there is no propensity to underestimate the slope…the added noise simply leads to the lower correlation. I’m sure adding in uncertainties to the x component would muck things up further.
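      That synthetic check, plus the contrast with noise in the x variable, can be sketched as follows (all parameters illustrative: true slope of 1, unit-variance x, noise three times the signal's spread):

```python
# Synthetic test: OLS recovers the slope when noise is in y only,
# despite a low correlation; comparable noise in x attenuates it.
import random

random.seed(1)
n = 1000
x_true = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [x + random.gauss(0.0, 3.0) for x in x_true]        # noise in y only
x_noisy = [x + random.gauss(0.0, 3.0) for x in x_true]  # noise in x as well

def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

print(ols_slope(x_true, y))   # typically close to the true slope of 1
print(ols_slope(x_noisy, y))  # attenuated well below 1 (regression dilution)
```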

      I agree with you that the lag regression analysis is likely a better way to get at the underlying feedback, and that a straight-up regression at this point may not find the true signal. This is something that I’ve been looking into on physical grounds.

      Comment by troyca — September 23, 2011 @ 7:25 am

  10. Thanks Troy.
    Roy Spencer has also reported similar tests and found OLS robust in this artificially pure case, even with a large proportion of normally distributed noise. Indeed, it can be shown mathematically that if the error component is Gaussian distributed, the OLS result is the “best linear unbiased estimator”. The problems start when reality rears its ugly head.

    Here is a plot of a Spencer model run with both the ols and the reciprocal of the reverse ols fit.
    http://tinypic.com/r/2cqcv28/7

    This is more representative of the kind of data you have, with error/uncertainty in both variables. Try adding white noise to _both_ axes and see what happens.

    In numbers, the two slopes were slope = 3.1438 and 1/inv_slope = 33.473, both way off the f/b = 6 fed in.

    Even attempts like the bisector or geometric mean of the two only get ballpark close. (The latter is about 10.)

    I’m highly surprised your results were anywhere near each other. Maybe the outlier helps pull it in the same direction or maybe I was unclear with what I meant about swapping the axes.

    Unfortunately I was unable to get the data to reproduce your plot. I went to get the ERA data and selected all the boxes as you suggested, but I found no button anywhere to actually get the data, and the promised link to download a Perl script was nowhere on the page. It seems the site has some problems, or maybe is using Java or something.

    Comment by P. Solar — September 24, 2011 @ 2:36 am

    • I think they might make you sign up before you can download any data (if so, it always remembers me and I haven’t had to go through it again). Fortunately, you don’t actually need the ERA data from there to reproduce the first plot…my script points to where I get Steve’s ERA CRF that Dessler calculated, and the CERES data is included in the zipped package I link to. Also, for the other plot, I’ve included the calculated global ERA fluxes, so you can modify the script to get them from that text file if you prefer.

      Regarding the results I posted, I should clarify. The slope I posted WAS the inverse slope, so bringing it back to compare the other would be -6.4, which is indeed quite far off. Sorry, I should have specified this.

      Comment by troyca — September 24, 2011 @ 10:35 am

      • Cool, so my guess of -8 was not too far off. Thanks for the clarification.

        Clearly a valid, objective method will produce similar results either way round, since there is no reason the choice of axes should affect the result.

        There are a number of ways to at least partially correct for regression dilution, but they all require further information about the data which is not easy to specify in a rigorous way. TLS, principal components and correction factors all need info about the relative size of the errors, the variance of each quantity, or the divergence from the true slope, which is basically the unknown we are looking for in the first place. There’s no short answer, which is why people cop out and misuse OLS.

        The fact that there is no clear-cut answer is a problem, since when the result does not fit someone’s belief system, the whole thing will get bogged down in debate about what method or figure is best.

        The one certitude seems to be that naive OLS is not suitable. Just getting that across to Andy Dessler and Trenberth et al would be a major step forwards. But they’ll fight it tooth and nail because they’ll see where it’s leading.

        Comment by P. Solar — September 24, 2011 @ 2:08 pm

  11. […] anywhere near the supposed debunking warmistas heralded it as. Go figure. Her'e Troy's stuff…. CERES and ERA-Interim fluxes in Dessler 2010 Troy's Scratchpad As always the skeptics are the ones ding calculations, running the statistics, checking […]

    Pingback by Al Gore fakes experiment to show global warming. - Page 30 - Sherdog Mixed Martial Arts Forums — October 12, 2011 @ 5:22 am

