As I’ve already looked at SB11 in a previous post, now I’ll turn to Dessler 2011, also including the critique Dr. Spencer put up on his blog. This post will just be dedicated to the “Energy Budget Calculation” section of D11, since there is plenty to go over there. All data and code used in this post are available here.

First, I’ll include a couple of representations of the same equation. The first is from Dessler’s video on his 2011 paper, and the second is from Spencer’s blog.

The unknown radiative forcing is represented by R in the Dessler equation and by N in Spencer’s. S and F_ocean are likewise the same term, representing the unknown non-radiative forcing; that is, the flux into and out of the mixed layer from the deeper ocean layers (thereby “forcing” surface temperatures), or from the atmosphere (this latter contribution is likely much smaller due to the lower heat capacity of the atmosphere). Spencer groups the N – lambda*T terms together because this combination represents the TOA flux, which is what CERES measures.
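Since both representations are embedded as images, here is my reconstruction of the shared equation from the description above (a sketch of the notation mapping, not a transcription of the figures):

```latex
% Dessler 2011 notation:
C\,\frac{dT_s}{dt} = R + F_{ocean} - \lambda T_s
% Spencer's equivalent form, grouping the CERES-measured quantity:
C\,\frac{dT}{dt} = S + \underbrace{(N - \lambda T)}_{\text{TOA flux anomaly (CERES)}}
```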

**Criticism #1: The Mixed Layer Depth**

Clearly, much depends on the value chosen for C, the heat capacity of the mixed layer. This is proportional to the depth (since we’re adding more total mass the deeper we include), and so the depth chosen for the mixed layer is crucial. Since we want to know to what degree surface temperatures are “forced” by energy fluxes from the deeper ocean layers, we need to know down to what depth the ocean temperatures are directly tied to the surface temperatures. To determine this, I simply find the correlation between the sea surface temperature (Reynolds SST) and the ocean temperature at each depth (calculated in my last post).
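Schematically, the depth-correlation diagnostic looks like the following (a minimal Python sketch, not the linked analysis code; the array names and the synthetic data are purely illustrative):

```python
# Sketch of the depth-correlation diagnostic: Pearson r between monthly SST
# anomalies and the ocean temperature anomaly at each depth. `sst` and
# `ocean_t` are hypothetical inputs with shapes (n_months,) and
# (n_months, n_depths); the synthetic data below just illustrates the idea.
import numpy as np

def correlation_by_depth(sst, ocean_t):
    """Pearson r between the surface series and the series at each depth."""
    sst = sst - sst.mean()
    ocean_t = ocean_t - ocean_t.mean(axis=0)
    num = (sst[:, None] * ocean_t).sum(axis=0)
    den = np.sqrt((sst ** 2).sum() * (ocean_t ** 2).sum(axis=0))
    return num / den

# Synthetic illustration: the surface signal fades with depth, so the
# correlation decays below the "mixed layer".
rng = np.random.default_rng(0)
n_months = 120
depths = np.arange(0, 700, 25)
sst = rng.standard_normal(n_months)
noise = rng.standard_normal((n_months, depths.size))
weight = np.exp(-depths / 60.0)            # signal weight decays with depth
ocean_t = weight * sst[:, None] + (1 - weight) * noise
r = correlation_by_depth(sst, ocean_t)     # high near surface, ~0 at depth
```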

Note that I’m using the CERES era here (2000-2010). For our effective “global” mixed layer (hah!), it appears to be somewhere between 50 and 75 meters. I also want to pause and note the curious spike at around 200 meters…I’m wondering if this has anything to do with the depth from which the energy from ENSO upwells, but that is for another time.

Anyhow, Spencer uses 25 m for this effective mixed layer, which looks to be too shallow. Dessler says that he uses a value of 168 W-month/m^2/K, the same as Lindzen and Choi 2011, which corresponds to a depth of 100 m; but as we’ll see later (and Spencer notes), it appears he is actually using a depth of 700 m. 100 meters is likely on the high side, but 700 meters is way beyond what could be considered the mixed layer in this case.

**Criticism #2: The Error Terms**

In the equation above, we calculate S (or F_ocean) by subtracting the CERES-measured TOA flux anomaly from the change in ocean heat content (down to the mixed layer depth, divided by the time step). Each of these measurements (from CERES and Levitus) is bound to have some error, but the way we calculate S, all of this error is aliased into the non-radiatively forced term! Since we’re comparing magnitudes by taking the standard deviation of S, there need not be a long-term bias in either measurement to cause a bias in S…I believe even random white noise will do it (I may give this a shot with some synthetic data).
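That synthetic-data test is easy to sketch: even with zero true non-radiative forcing, unbiased white measurement noise in both inputs shows up as spurious variance in the diagnosed S, because S is computed as a residual. The noise magnitudes below are arbitrary, chosen only to illustrate the mechanism:

```python
# Sketch: diagnose S = d(OHC)/dt - F_toa when the TRUE S is identically zero
# but both measurements carry white noise. The noise aliases into S.
import numpy as np

rng = np.random.default_rng(42)
n_months = 120
true_s = np.zeros(n_months)                 # assume no real non-radiative forcing
ohc_noise = rng.normal(0, 1.0, n_months)    # noise in the OHC tendency (W/m^2)
toa_noise = rng.normal(0, 0.5, n_months)    # noise in the measured TOA flux

# S is a residual of two noisy measurements, so both error terms land in it.
s_diagnosed = (true_s + ohc_noise) - toa_noise
print(s_diagnosed.std())                    # nonzero despite true_s being zero
```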

**Running the Numbers**

Now, in his paper, Dessler mentions that he gets standard deviations for F_ocean of 9 W/m^2 and 13 W/m^2 for monthly flux anomalies. My first thought was that this is quite a large number (~3 times what you get for a CO2 doubling), and that surely this would mean we’d see more than the ~0.1 K (standard deviation) surface temperature fluctuations over this time, even though the fluxes are short-term. For my numbers, at 700 m, I got about 5.14 W/m^2, which we could perhaps reconcile by the fact that I only have 3-monthly anomalies available, and by Dessler possibly not removing the seasonal components over this CERES period. However, 700 meters CANNOT be used as the mixed layer depth. As we can see above, or in my last post, the temperature down to 700 meters does not represent the surface temperatures (r^2 ~ 0.05), which we are using to diagnose the climate feedbacks. This is why those flux anomalies don’t appear reasonable and don’t show up in surface temperature fluctuations. Using the incorrect 700 m flux, I can get a ratio of about 10:1 for sd(S)/sd(N), which is on Dessler’s order of magnitude:
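For reference, the ratio calculation itself is simple (a minimal Python sketch, not the linked analysis code; the function and series names are hypothetical and all inputs are anomaly series in W/m^2):

```python
# Sketch of the sd(S)/sd(N) comparison. S is diagnosed as a residual,
# S = d(OHC)/dt - F_toa, and N is recovered from the measured TOA flux
# via N = F_toa + lam*T (since the measured flux is N - lam*T).
import numpy as np

def s_to_n_ratio(d_ohc_dt, f_toa, lam, t_anom):
    """Return sd(S)/sd(N), using sample standard deviations (ddof=1)."""
    d_ohc_dt, f_toa, t_anom = map(np.asarray, (d_ohc_dt, f_toa, t_anom))
    s = d_ohc_dt - f_toa          # non-radiative term, diagnosed as a residual
    n = f_toa + lam * t_anom      # radiative term, recovered from measured TOA
    return s.std(ddof=1) / n.std(ddof=1)
```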

I’m presuming this is a simple mistake on Dessler’s part, using the 700 meter depth instead of 50 or 100 meters. However, the following quote from Dessler 2011 gives me pause:

The formulation of Eq. 1 is potentially problematic because the climate system is defined to include the ocean, yet one of the heating terms is flow of energy to/from the ocean (F_ocean). This leads to the contradictory situation where heating of their climate system by the ocean (F_ocean > 0) causes an increase of energy in the ocean (C(dTs/dt) > 0), apparently violating energy conservation.

This appears to be a fundamental misunderstanding of the equation. C(dTs/dt) represents the change in heat content of the *mixed layer*, while F_ocean represents the flux to/from the layers below (the deeper ocean) and above (the atmosphere). There is no violation of energy conservation. Furthermore, Dessler’s conflation of the mixed layer with the deeper layers of the ocean (treating both simply as “ocean”) may have led to the mistake of including the full 700 meter depth in the “mixed layer”.

To put it another way, we don’t care about the exchange of heat among ocean layers in this case UNLESS it forces surface temperature changes. Since the depths below 100 m are not tied to the surface temperature changes, heat exchange between (for example) the 900 m and 700 m levels should NOT be included in the non-radiative flux term (S) unless it crosses that 100 m boundary, but Dessler’s formulation has included it (assuming he’s using 700 m). Furthermore, under this formulation, ENSO would have no effect on surface temperatures if it caused warm water from the 200 m to 700 m layer to upwell into the surface layers, so the S term does not even seem to include the major effects of ENSO, which is the primary component of non-radiative forcings over this period!

Anyhow, for my three-month standard deviations in fluxes I get 1.85 W/m^2 for 100 meter depth, which is slightly less than the 2.30 W/m^2 that Spencer calculates. This leads to ratios of between 3.2:1 and 3.8:1 for S/N, much less than Dessler’s number (20:1). However, Spencer mentions on his blog that he gets a ratio of ~ 2:1 for the 100 meter depth, which I am unable to reproduce.

If we use the 50 meter depth, which is on the lower end for mixed layer depth choices, I get a flux standard deviation of about 1.08 W/m^2, and THEN I get a ratio of about 2:1:

This matches up with the Lindzen and Choi 2011 paper, but is still a good deal larger than the SB11 estimate of 0.5:1. As I mentioned above, the standard deviation of S might be inflated by errors in the CERES and/or Levitus data, but I doubt it is to that degree. I’ll need to look into that later.

Anyhow, I went back to the Spencer-Braswell 2011 model and saw what effect it would have if we used the updated ratio calculations:

All of the lines, except the purple one, were generated using the ratios mentioned above for the 100 meter and 50 meter layer results. In each case, the feedback is underestimated at zero lag. However, note that none of the ratios gives the lagged signature we see in the observations, except for the purple line, which requires a ratio of about 0.67:1 to get there. From what I can tell, that would require a rather large combined error contribution from CERES + Levitus for that relationship to be true.
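A rough Python sketch of the kind of simple forcing-feedback model behind these experiments (this is not SB11’s actual code; the parameter values are illustrative, and the run is long only to beat sampling noise):

```python
# Simple one-box model: C*dT/dt = N(t) + S(t) - lam*T, with the "measured"
# TOA flux F(t) = N(t) - lam*T(t). Regressing F on T at zero lag then shows
# how radiative noise N biases the diagnosed feedback low.
import numpy as np

rng = np.random.default_rng(1)
n_months = 12000
dt = 1.0                  # time step, months
lam = 3.0                 # true feedback, W/m^2/K
C = 168.0                 # heat capacity, W-month/m^2/K (~100 m mixed layer)
ratio = 3.5               # assumed sd(S)/sd(N), per the 100 m estimates above

n_forcing = rng.normal(0, 1.0, n_months)      # unknown radiative forcing
s_forcing = rng.normal(0, ratio, n_months)    # non-radiative (ocean) forcing

T = np.zeros(n_months)
for i in range(1, n_months):
    # forward step that includes the current month's forcing
    T[i] = T[i-1] + (n_forcing[i] + s_forcing[i] - lam * T[i-1]) * dt / C

f_toa = n_forcing - lam * T                   # what CERES would measure
slope = np.polyfit(T, f_toa, 1)[0]            # zero-lag regression slope
# The diagnosed feedback (-slope) comes out smaller in magnitude than the
# true lam, i.e. the feedback is underestimated at zero lag.
```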

So, in the interest of finding common ground between Dessler and Spencer, I would tentatively say the following (I reserve the right to change my mind if more information becomes available [i.e. I am shown to be wrong]):

- Part, but not most, of the surface temperature fluctuations in the last decade has come from unknown radiative forcings (25% – 40%).
- This has led to underestimates of the overall climate feedback using CERES and surface temperatures, and has thus led to overestimates of climate sensitivity using this method.
- The difference in the lagged signatures between the observations and GCMs is more likely the result of improperly modeled ENSO variations than of unknown radiative forcings (pending further review of the rest of Dessler 2011).

__Update 9/19__

For those of you questioning whether Dessler11 actually does incorrectly include all the way down to the 700 meter layer for his Argo data calculations, note the following from this paper:

This can be confirmed by looking at the Argo ocean heat content data covering 2003-2008. Using data reported in Douglass and Knox [2009], the month-to-month change in monthly interannual heat content anomalies can be calculated (sd = 1.2 x 10^22 J/month).

The reference leads to this paper, which notes under section 3.1:

A new system was deployed in 2000 consisting in part of a broad-scale global array of temperature/salinity profiling floats, known as Argo [20]. Monthly values of Argo HO were determined from data **to a depth of 750 m**. Values from July 2003 to January 2008 are given by W08 and are listed in Table S-1.

Bold is mine. If you have paywall access, you can simply download the supplementary data for Table S-1, use the Argo columns, diff them (to get the change in OHC), and remove seasonal effects; you’ll get the standard deviation of 1.2 x 10^22 J/month that Dessler quotes above. I would also note that it appears Dessler11 doesn’t actually attempt to calculate S(t), but instead simply tries to determine the standard deviations, as mentioned by Paul_K in a comment at the Air Vent.
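As a sanity check, converting that quoted 1.2 x 10^22 J/month into a flux anomaly recovers roughly the ~9 W/m^2 figure from the paper (a back-of-the-envelope Python sketch; I’m assuming the normalization is by the full Earth surface area, as TOA fluxes are reported globally):

```python
# Convert the quoted Argo OHC variability (sd = 1.2e22 J/month, 0-750 m)
# into an equivalent global flux anomaly in W/m^2. Constants are standard.
SECONDS_PER_MONTH = 365.25 / 12 * 86400    # ~2.63e6 s
EARTH_AREA_M2 = 5.1e14                     # total surface area of the Earth

sd_ohc = 1.2e22                            # J/month, from Douglass & Knox data
sd_flux = sd_ohc / SECONDS_PER_MONTH / EARTH_AREA_M2
print(sd_flux)                             # roughly 9 W/m^2
```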

Troy, your Figure 1 (unnamed) in Criticism #1 https://troyca.files.wordpress.com/2011/09/sst_oceantemp_depths.png appears to be a response curve. You say “Clearly, much depends on the value chosen for C, the heat capacity of the mixed layer. This is proportional to the depth (since we’re adding more total mass the deeper we include), and so the depth chosen for the mixed layer is crucial.” I think I would consider it to be proportional to heat capacity, input, and response of the system. It is not proportional to depth; it is a measure of the response. In this case depth is approximately equal to time to diffuse the heat in an infinite sink. I think this is along the lines of Bart’s and MarkT’s comments on CA.

Just a thought.

Comment by John Pittman — September 16, 2011 @ 2:11 pm

John,

I think my post might have been unclear. What I mean is that C (the heat capacity) is proportional to the depth chosen, simply because C = specific heat * mass, and assuming a constant area and constant water density (which is roughly okay here), using a “mixed layer” from 0-700m will have approximately 7 times the heat capacity of a “mixed layer” from 0-100m, simply because it has 7 times the volume (and mass).
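The arithmetic being described is simple enough to write down (a quick Python sketch; the density and specific heat are typical seawater values, not anything from the papers):

```python
# Per unit area, the mixed-layer heat capacity is C = rho * c_p * depth,
# so C scales linearly with the depth chosen: 0-700 m has 7x the heat
# capacity of 0-100 m, all else held constant.
RHO = 1025.0                               # seawater density, kg/m^3
CP = 3985.0                                # specific heat of seawater, J/kg/K
SECONDS_PER_MONTH = 365.25 / 12 * 86400

def heat_capacity(depth_m):
    """Heat capacity per unit area, in W-month/m^2/K."""
    return RHO * CP * depth_m / SECONDS_PER_MONTH

c100 = heat_capacity(100.0)   # ~155 W-month/m^2/K, near the 168 used in LC11
c700 = heat_capacity(700.0)   # exactly 7x c100
```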

I think I made this confusing because of the figure which follows, in which depth does NOT refer to the average temperature from 0 to depth, but rather refers to the average temperature as measured from floats at that actual depth.

Does that clear things up? If so, I’ll see if I can improve the phrasing in the post.

Comment by troyca — September 16, 2011 @ 2:48 pm

Sorry, the r^2 was a bit vague to my glass-aided eyes. But I would caution you that with a response and an area (equal to a constant volume as you go down in depth), this is only true at equilibrium. As some of the climate scientists and their critics would remind you, it is pseudo-equilibrium because there is a claim, right, wrong, or neither, of an estimate of a RESPONSE, like climate sensitivity of some sort.

You cannot have such a control volume, with a feedback response, and claim explicitly that such a curve always means such and such. If you claim that “C (the heat capacity) is proportional to the depth chosen, simply because C = specific heat * mass, and assuming a constant area and constant water density (which is roughly okay here)” you assume all of the above constant except depth; it should then be a straight line to about 400 meters per your graph, not 700. Or, accounting for diffusion of heat (m^2/s), it should be asymptotic with respect to 400 meters per your graph.

I assumed that the SST was constant. If not, see comments above. But Bart and MarkT are more experienced and knowledgeable than I. Perhaps we could ask them to join our conversation.

Comment by John Pittman — September 16, 2011 @ 6:34 pm

WARNING I don’t always use a spell checker.

Comment by John Pittman — September 16, 2011 @ 6:35 pm

John,

I’m afraid I still don’t understand specifically what you disagree with. I understand that the specific heat will vary slightly with temperature, salinity, etc., but this variation is still rather small relative to the total heat capacity. The value used for heat capacity in LC11 (which Dessler11 also uses in one part, though not for the Argo measurements) is 168 W-months/m^2/K, which corresponds to a depth of approximately 100 meters. Would you agree that including everything down to a 700 meter depth would mean roughly a heat capacity of about 7 times that (it will be slightly different at different times of the year and in different locations, and the area at the 700 m layer is slightly smaller due to the sea floor in certain spots, but I’m only talking approximately here)?

The graph I put up is illustrating a *different* point entirely, unrelated to heat capacity. By definition, the “mixed layer” should have an approximately uniform temperature. We can see at which depth the temperature measurements are no longer in sync with the surface by determining the correlation between the temperature at each depth and those at the surface. I merely note that somewhere between 50 and 75 meters, the temperatures stop being in sync with the surface, and so this might be a good depth for the “mixed layer”. Do you disagree with this method of determining the mixed layer depth?

I would welcome Mark T and/or Bart’s comments here, although I’m afraid I don’t specifically see how that topic is related. I want to reiterate that the figure is NOT trying to prove anything about the relationship between heat capacity and depth.

Comment by troyca — September 19, 2011 @ 9:34 am

Troy, thanks for another great post. I have been following along since your July post on D10.

Comment by Layman Lurker — September 18, 2011 @ 1:40 pm

Thanks, LL.

Comment by troyca — September 19, 2011 @ 9:35 am

What is really needed here is the Green’s function. Wiki has it here in the section headed

“homogeneous initial conditions and non-homogeneous Dirichlet boundary conditions”. h(t) is SST as a function of time and u(x,t) is the temperature at depth. You can integrate over x to get another Green’s function relating just heat content to SST. Even this is much over-simplified, because it assumes uniform conductivity, which is very much not true. But it indicates what is wrong with prescribed levels, whether 25 m or 700 m. There’s just a continually evolving profile over time, and it isn’t anything like uniform down to a level where it then just stops.
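For reference, the standard result being pointed to (1-D heat equation on a half-space with prescribed surface temperature h(t) and a constant diffusivity α, an idealization as noted) can be written as:

```latex
u(x,t) \;=\; \int_0^t h(\tau)\,
  \frac{x}{\sqrt{4\pi\alpha}\,(t-\tau)^{3/2}}
  \exp\!\left(-\frac{x^2}{4\alpha\,(t-\tau)}\right)\, d\tau
```

Integrating this over x then gives the corresponding relation between the column heat content and the SST history.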

That’s not to say that either Spencer or Dessler should just give up. You need to do something with the information you can get.

Comment by Nick Stokes — September 19, 2011 @ 5:23 pm

Thanks Nick, but my understanding is that the heat exchange within the “mixed layer” is primarily the result of mechanical mixing (convection), not top-down heat diffusion through conduction? The “mixed layer” then achieves its uniform temperature far quicker than if we were simply seeing the heat conducted through water. However, I confess to being a bit over my head here, and I agree that there is no “set” level for the mixed layer. But clearly there is some (albeit slightly variable) heat capacity associated with the time-frame of these observations that can be used as a decent estimator in that 1-D model, and as critics were quick to point out, the 25 m depth associated with Spencer probably isn’t it. Neither is the 700 meters used by Dessler.

Comment by troyca — September 21, 2011 @ 7:31 am