In a previous post on Dessler 2011, I commented that the paper had a few important errors, most notably an incorrect use of the ocean mixed layer heat capacity to argue that the forcing from ocean heat transport dominates the unknown radiative forcing when it comes to surface temperature changes. The paper has since been published. In that previous post I noted (although Dr. Spencer was there first) that Dessler 2011 incorrectly used the 700 meter layer (down to 750 m) from the Argo data, which could be confirmed by downloading the Douglass and Knox data. That fact has now been acknowledged, with text explicitly added, in the published version:
“This can be confirmed by looking at the Argo ocean heat content data covering the top 750 m of the ocean over the period 2003–2008”.
I don’t want to go over the same ground as the previous post, but I will quickly note the following on why this is incorrect (slightly modifying one of my own comments from over at Bart’s):
The heat that ENSO redistributes from the lower layers to the mixed layer is primarily what Dessler and Spencer are trying to estimate with the “non-radiative” forcing. But the way Dessler has set it up, heat redistributed from the 110–400 m layers to the mixed layer during ENSO is NOT counted toward this forcing, while heat redistributed from the 800 meter layer to the 700 meter layer IS counted, even though the 700 meter layer is uncorrelated with surface temperature changes. Clearly, Dessler’s formulation is problematic.
The above image shows the relationship between SST and the temperature measured by the floats at different depths, for three-month anomalies over 2000–2010. As one can see, the top 100 meters or so show a positive relationship between SST and the temperature at those depths, whereas from 110–400 meters we see the opposite. This is because during El Niño events heat leaves the 110–400 m layer and makes its way into the upper layers, with the reverse occurring during La Niña events. Beyond 400 m, the heat lost or gained in those layers shows little relationship with SST when it comes to seasonal variations.
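To make the figure's relationship concrete, here is a minimal numpy sketch of the per-depth regression; the function name and the synthetic series are my own illustration, not the actual Argo data:

```python
import numpy as np

def depth_regression(sst_anom, temp_anom_by_depth):
    """Regress temperature anomalies at each depth level against SST anomalies.

    sst_anom: 1-D array of three-month SST anomalies (K)
    temp_anom_by_depth: 2-D array, shape (n_depths, n_times), of
        temperature anomalies at each depth level (K)
    Returns one least-squares slope per depth level.
    """
    slopes = np.empty(temp_anom_by_depth.shape[0])
    for i, layer in enumerate(temp_anom_by_depth):
        # slope = cov(SST, T_layer) / var(SST)
        slopes[i] = np.cov(sst_anom, layer)[0, 1] / np.var(sst_anom, ddof=1)
    return slopes

# Synthetic illustration only: upper layers co-vary with SST, the
# 110-400 m layers vary oppositely (ENSO heat exchange), deep layers
# show no clear relationship.
rng = np.random.default_rng(0)
sst = rng.standard_normal(40)
layers = np.vstack([0.8 * sst,                   # ~50 m: positive slope
                    -0.4 * sst,                  # ~200 m: negative slope
                    rng.standard_normal(40)])    # ~700 m: ~uncorrelated
print(depth_regression(sst, layers))
```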
There is, however, another method used by Dessler 2011 to estimate the monthly fluxes, which involves simply taking a set heat capacity (presumably the mixed layer) and multiplying it by the monthly fluctuations in SST:
To evaluate the magnitude of the first term, C(dTs/dt), I assume a heat capacity C of 168 W-month/m^2/K, the same value used by LC11 (as discussed below, SB11’s heat capacity is too small). The time derivative is estimated by subtracting each month’s global average ocean surface temperature from the previous month’s value.
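In code, that first term is just a constant times the month-to-month SST difference. A minimal sketch (the function name and the sample SST values are hypothetical, not Dessler's data):

```python
import numpy as np

# Dessler 2011's first term, C * dTs/dt, with the LC11 value quoted above.
C = 168.0  # W-month m^-2 K^-1

def flux_from_sst(sst_monthly):
    """Monthly heat-flux estimate (W/m^2) from global-mean SST (K).

    The derivative is each month's SST minus the previous month's,
    so the result has one fewer element than the input.
    """
    dTs_dt = np.diff(sst_monthly)   # K/month
    return C * dTs_dt               # W/m^2

# A 0.05 K month-to-month rise implies ~8.4 W/m^2 under this assumption.
print(flux_from_sst(np.array([20.00, 20.05, 19.98])))
```

Note how sensitive the result is to the choice of C: the same SST wiggle scaled by a smaller, shallower-layer heat capacity would give a proportionally smaller flux.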
The choice for this heat capacity is particularly important, given that Murphy and Forster 2010 used a similar method in their response to Spencer and Braswell 2008. Science of Doom has been looking into this as well, and gives some background on the origins of why this is important related to the original Forster and Gregory topic.
So, why do we get such radically different results when using 168 W-month/m^2/K times SST differences (~9 W/m^2 per month) versus actual measurements of the top 100 meters in Argo (~2 W/m^2 per month)? Looking at the figure above, one reason should be fairly clear: a change in global SST does not produce a uniform change in all layers down to 100 m on these short time scales; if it did, the graph would show a regression coefficient of about 1 all the way to 100 meters. Since the regression coefficient in fact falls off fairly quickly below 50 meters (likely because parts of the globe don't have a mixed layer deeper than 50 m), using a single constant heat capacity equivalent to 100 meters of water will overestimate the heat changes.
So, where does this 168 W-month/m^2/K come from? In our conversation over at Bart’s, Eli Rabett notes the following:
Dessler and Lindzen and Choi, and Schwartz and etc.’s heat capacity of 168 W-month/m^2/K is the standard choice corresponding to a depth of ~ 100 m as you say. Everyone is Galileo
Which was helpful in that it led me to Schwartz 2007, who appears to be the originator. In there, he notes:
The present analysis indicates that the effective heat capacity of the world ocean pertinent to climate change on this multidecadal scale may be taken as 14 ± 6 W yr m^-2 K^-1. The effective heat capacity determined in this way is equivalent to the heat capacity of 106 m of ocean water or, for ocean fractional area 0.71, the top 150 m of the world ocean. This effective heat capacity is thus comparable to the heat capacity of the ocean mixed layer.
The 168 appears to come from 14 W-yr/m^2/K * 12 months/year. But the regression in S07 was performed to estimate the heat capacity on multi-decadal scales, when heat slowly moves into the deeper ocean, not merely the short-term monthly, seasonal, or even annual variations during 10 year periods! Indeed, even Murphy and Forster 2010 note:
An appropriate depth for heat capacity depends on the length of the period over which Eq. (1) is being applied (Watterson 2000; Held et al. 2010).
But they seem to proceed with the 100m derived for a much longer period. We can get this approximate 14 W-yr/m^2/K as follows:
Specific heat of water * mass of the water column per m^2 / seconds per year =
4.18 (J/g/K) * (1000 g/kg) * (1000 kg/m^3) * (106 m depth) / [(365.25 days/yr) * (24 hrs/day) * (3600 s/hr)] = 14 W-yr/m^2/K
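The same arithmetic in a short script (a sketch using the freshwater values from the text):

```python
# Reproduce the back-of-envelope heat-capacity numbers from the text.
CP_FRESH = 4.18e3               # J kg^-1 K^-1, specific heat of (fresh) water
RHO = 1.0e3                     # kg m^-3, density of water
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def heat_capacity_w_yr(depth_m):
    """Areal heat capacity of a water column, in W-yr m^-2 K^-1."""
    return CP_FRESH * RHO * depth_m / SECONDS_PER_YEAR

c_yr = heat_capacity_w_yr(106)   # ~14.0 W-yr m^-2 K^-1 (Schwartz 2007)
c_month = c_yr * 12              # ~168 W-month m^-2 K^-1
print(round(c_yr, 1), round(c_month))
```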
I’ll note a few things in passing. First, 4.18 J/g/K is technically the specific heat of fresh water, not seawater. Furthermore, at lower depths we start hitting sea bottom, so the area fraction of the deeper layers is not quite as large as that of the shallower layers, although this should not make much of a difference in the first 110 meters. Most importantly, though, multiplying the result of that 14 W-yr/m^2/K regression by 12 and applying it to monthly values assumes that the heat capacity is the same for any length of period, i.e. that the annual change in energy is about 12 times the monthly change, which is simply not the case when there are monthly and seasonal fluctuations up and down. Indeed, using the output of the ECHAM-MPI model (which is free of the measurement noise that affects short-term actual measurements), the three-month heat changes during ENSO-dominated periods are only about 2 times the single-month changes, and the annual heat changes are only about 4 times the single-month changes.
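A quick synthetic illustration of that last point (the trend, oscillation, and noise parameters here are invented for illustration, not ECHAM-MPI output): when large ENSO-like fluctuations ride on a small trend, the mean 12-month heat change comes out far smaller than 12 times the mean single-month change.

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(240)
# Small secular trend + ~3-year ENSO-like oscillation + monthly noise,
# all in arbitrary heat-content units.
H = 0.1 * months + 5.0 * np.sin(2 * np.pi * months / 36) + rng.standard_normal(240)

dH_1 = np.abs(np.diff(H))            # single-month changes
dH_12 = np.abs(H[12:] - H[:-12])     # 12-month changes

ratio = dH_12.mean() / dH_1.mean()   # well below the factor of 12 a
print(round(ratio, 1))               # fixed heat capacity would imply
```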
Since we’re interested in the seasonal and annual responses during a 10 year period (as in Forster and Gregory 2006 or Murphy et al. 2009), we can use this most recent decade for a new regression. Here I calculate a more relevant relationship between the diff of SST (dTs/dt) and the actual measured OHC flux (dH/dt) of the top 110 meters. As seen below, we get a regression coefficient of approximately 32 for the three-month anomalies, which (if I’m not screwing this up) would theoretically correspond to about a 60 m mixed layer depth for this time period (and indeed, on a monthly scale, 40 m is more appropriate).
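For the depth conversion, here is a sketch; note that reading the three-month slope of ~32 as W per (K per 3 months), i.e. 3 * 32 = 96 W-month/m^2/K, is my interpretation of the units:

```python
# Convert an areal heat capacity in W-month m^-2 K^-1 into the
# equivalent depth of water, using the freshwater values from the text.
CP = 4.18e3                      # J kg^-1 K^-1
RHO = 1.0e3                      # kg m^-3
SECONDS_PER_MONTH = 365.25 * 24 * 3600 / 12

def depth_for_capacity(c_w_month):
    """Equivalent ocean depth (m) for a heat capacity in W-month m^-2 K^-1."""
    return c_w_month * SECONDS_PER_MONTH / (CP * RHO)

print(round(depth_for_capacity(168)))      # ~106 m (Schwartz 2007 value)
# Three-month slope of ~32 read as 96 W-month m^-2 K^-1 (my assumption):
print(round(depth_for_capacity(3 * 32)))   # ~60 m
```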
All of this ignores the fact that there are likely measurement errors in the ocean temperatures as well, and that at one month those errors will affect the average more than at three months, and certainly more than at a year. In discussions over at Isaac Held’s place, it’s becoming clearer to me, based on model estimates of that regression coefficient, that monthly anomalies are not effective for calculating the feedbacks, and that we’ll probably need to look more at annual fluxes to reduce the noise (both measurement and atmospheric) when calculating that sensitivity. On those time scales, at least according to Dr. Spencer’s recent post, it appears that the “unknown” radiative forcing can contribute 60% of the forcing necessary for the ocean heat changes.
Taking a look at the control runs in Dessler 2011 (figure 2) at zero lag, the models seem to show an average coefficient of around 0.5, which corresponds to a sensitivity per doubling of CO2 of about 3.8/0.5 = 7.6 C, when in fact we know the average sensitivity in those models is around 3 C. This would suggest that the method of Murphy et al. 2009 and Forster and Gregory 2006, using seasonal anomalies of flux versus temperature changes, either shows no linear relationship with the actual sensitivity or, if there is a relationship, severely overestimates it (applying the corresponding correction to the observational results would yield an ECS of ~1.3 C per doubling of CO2, not even accounting for the underestimate caused by the issues in SB08/SB11). However, I tend to think it is not an accurate way to diagnose feedback, as many of the regression coefficients are near 0. What remains to be seen is whether using annual anomalies of TOA flux and temperature changes will yield better estimates.
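The arithmetic behind that sensitivity figure is just the forcing per doubling divided by the feedback coefficient (3.8 W/m^2 per doubling is the value implied by the text):

```python
# Convert a regression (feedback) coefficient into an equilibrium
# sensitivity per doubling of CO2.
F_2X = 3.8  # W m^-2 forcing per CO2 doubling, as used in the text

def ecs_from_lambda(lam):
    """Equilibrium climate sensitivity (K) for feedback lam in W m^-2 K^-1."""
    return F_2X / lam

print(ecs_from_lambda(0.5))   # 7.6 K, versus the ~3 K the models actually have
```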
Code and data to reproduce the figure available here.