As I’ve already looked at SB11 in a previous post, now I’ll turn to Dessler 2011, also including the critique Dr. Spencer put up on his blog. This post will just be dedicated to the “Energy Budget Calculation” section of D11, since there is plenty to go over there. All data and code used in this post are available here.
First, I’ll include two representations of the same equation. The first is from Dessler’s video on his 2011 paper, and the second is from Spencer’s blog.
The unknown radiative forcing is represented by R in Dessler’s equation and N in Spencer’s. S and F_ocean are likewise the same term, representing the unknown non-radiative forcing; that is, the flux into and out of the mixed layer from the deeper ocean (thereby “forcing” surface temperatures) or from the atmosphere (the latter likely much smaller, given the atmosphere’s lower heat capacity). Spencer groups the N – lambda*T terms together because together they represent the TOA flux, which is what CERES measures.
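For reference, written out from the symbol definitions above (this is my transcription, so check it against the images), the two forms are:

```latex
% Dessler's representation (from the video on D11)
C \frac{dT_s}{dt} = R + F_{\mathrm{ocean}} - \lambda T_s

% Spencer's representation, grouping the TOA terms that CERES measures
C \frac{dT}{dt} = S + \left( N - \lambda T \right)
```

The two are term-by-term identical; only the grouping and the symbol names differ.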
Criticism #1: The Mixed Layer Depth
Clearly, much depends on the value chosen for C, the heat capacity of the mixed layer. This is proportional to the depth (the deeper we go, the more total mass we include), so the depth chosen for the mixed layer is crucial. Since we want to know to what degree surface temperatures are “forced” by energy fluxes from the deeper ocean layers, we need to know down to what depth ocean temperatures are directly tied to surface temperatures. To determine this, I simply find the correlation between sea surface temperature (Reynolds SST) and the ocean temperature at each depth (calculated in my last post).
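The calculation itself is simple enough to sketch. Here is roughly how it goes in Python, with synthetic data standing in for the Reynolds SST and depth-resolved ocean temperature series (all the array names and noise magnitudes here are invented for illustration):

```python
import numpy as np

def corr_by_depth(sst_anom, ocean_temp_anom):
    """Correlation of the SST anomaly series with the temperature
    anomaly series at each depth level.

    sst_anom:        1-D array of monthly SST anomalies
    ocean_temp_anom: 2-D array (n_months, n_depths) of temperature
                     anomalies by depth level
    """
    n_depths = ocean_temp_anom.shape[1]
    return np.array([np.corrcoef(sst_anom, ocean_temp_anom[:, d])[0, 1]
                     for d in range(n_depths)])

# Synthetic stand-in data: a surface signal whose imprint fades with depth
rng = np.random.default_rng(0)
n_months, n_depths = 132, 8              # ~2000-2010, levels down to ~700 m
sst = rng.standard_normal(n_months)
decay = np.linspace(1.0, 0.0, n_depths)  # surface influence fades with depth
temps = sst[:, None] * decay + 0.5 * rng.standard_normal((n_months, n_depths))

r = corr_by_depth(sst, temps)            # high near the surface, ~0 at depth
```

The real version just swaps the synthetic arrays for the actual SST and Levitus-style temperature anomalies.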
Note that I’m using the CERES era here (2000-2010). For our effective “global” mixed layer (hah!), it appears to be somewhere between 50 and 75 meters. I also want to pause and note the curious spike at around 200 meters…I wonder whether this has anything to do with the depth from which the energy from ENSO upwells, but that is for another time.
Anyhow, Spencer uses 25 m for this effective mixed layer, which looks to be too shallow. Dessler says he uses a value of 168 W-month/m^2/K, which is the same as Lindzen and Choi 2011 and corresponds to a depth of about 100 m, but as we’ll see later (and as Spencer notes), he appears to actually be using a depth of 700 m. 100 meters is likely on the high side, but 700 meters is far beyond anything that could be considered the mixed layer in this case.
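As a quick sanity check on that 168 figure, we can convert a depth into those heat-capacity units. I’m using approximate round values for seawater density and specific heat, so the exact result depends on the constants chosen:

```python
# Approximate seawater constants (round values; exact choices vary)
rho = 1025.0      # density, kg/m^3
cp = 4186.0       # specific heat, J/(kg K)
sec_per_month = 365.25 * 24 * 3600 / 12   # ~2.63e6 s

def heat_capacity_w_month(depth_m):
    """Heat capacity of a mixed layer of the given depth,
    in W-month per m^2 per K."""
    return rho * cp * depth_m / sec_per_month

print(heat_capacity_w_month(100))   # ~163, close to the quoted 168
print(heat_capacity_w_month(700))   # ~1142, the 700 m equivalent
```

So 168 W-month/m^2/K is indeed consistent with a mixed layer of roughly 100 m.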
Criticism #2: The Error Terms
In the equation above, we calculate S (or F_ocean) by subtracting the CERES-measured TOA flux anomaly from the change in ocean heat content (down to the mixed layer depth, divided by the time step). Each of these measurements (from CERES and from Levitus) is bound to carry some error, but because of the way we calculate S, all of that error is aliased into the non-radiatively forced term! And since we compare magnitudes by taking the standard deviation of S, there need not be a long-term bias in either measurement to inflate S…I believe even random white noise will do it (I may give this a shot with some synthetic data).
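Here is that synthetic-data idea in miniature: set the true S to exactly zero, add independent white noise to the two measurements, and the diagnosed S comes out with a nonzero standard deviation anyway. All magnitudes here are invented for illustration:

```python
import numpy as np

# True state: the non-radiative forcing S is EXACTLY zero, so the
# change in mixed-layer heat content equals the TOA flux N.
rng = np.random.default_rng(42)
n = 120                                   # months
true_N = rng.standard_normal(n)           # true TOA flux anomaly, W/m^2
true_S = np.zeros(n)                      # no non-radiative forcing at all

# Independent white measurement noise (made-up magnitudes):
ceres_noise = 0.5 * rng.standard_normal(n)    # CERES error
levitus_noise = 0.5 * rng.standard_normal(n)  # OHC-change error

measured_N = true_N + ceres_noise
measured_dOHC = (true_N + true_S) + levitus_noise   # C*dT/dt = N + S

# The diagnosed S aliases ALL of the measurement error:
diagnosed_S = measured_dOHC - measured_N
print(np.std(diagnosed_S))    # ~0.7 W/m^2, despite the true S being zero
```

So even unbiased noise in either dataset pads out sd(S) directly.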
Running the Numbers
Now, in the Dessler paper, he mentions that he gets standard deviations for F_ocean of 9 W/m^2 and 13 W/m^2 for monthly flux anomalies. My first thought was that this is quite a large number (~3 times the forcing from a CO2 doubling), and that fluxes of this size would surely produce more than the ~0.1 K (standard deviation) surface temperature fluctuations we see over this period, even though the fluxes are short-term. For my numbers at 700 m, I get about 5.14 W/m^2, which we could perhaps reconcile with Dessler’s figure given that I only have 3-monthly anomalies available, and if Dessler didn’t remove the seasonal components over this CERES period. However, 700 meters CANNOT be used as the mixed layer depth. As we can see above, or in my last post, the temperature down to 700 meters does not track the surface temperature (r^2 ~ 0.05), which is what we are using to diagnose the climate feedbacks. This is why those flux anomalies don’t appear reasonable and don’t show up in surface temperature fluctuations. Using the incorrect 700 m flux, I get a ratio of about 10:1 for sd(S)/sd(N), which is on Dessler’s order of magnitude:
I’m presuming this is a simple mistake on Dessler’s part: using the 700 meter depth instead of 50 or 100 meters. However, the following quote from Dessler 2011 gives me pause:
The formulation of Eq. 1 is potentially problematic because the climate system is defined to include the ocean, yet one of the heating terms is flow of energy to/from the ocean (F_ocean). This leads to the contradictory situation where heating of their climate system by the ocean (F_ocean > 0) causes an increase of energy in the ocean (C(dTs/dt) > 0), apparently violating energy conservation.
This appears to be a fundamental misunderstanding of the equation. C(dTs/dt) represents the change in heat content of the mixed layer, while F_ocean represents the flux to/from the layers below (the deeper ocean) and above (the atmosphere). There is no violation of energy conservation. Furthermore, Dessler’s conflation of the mixed layer with the deeper ocean (treating both simply as “ocean”) may have led to the mistake of using the full 700 meter depth for the “mixed layer”.
To put it another way, we don’t care about the exchange of heat among ocean layers in this case UNLESS it forces surface temperature changes. Since the depths below 100 m are not tied to surface temperature changes, heat exchanged between (for example) the 900 m and 700 m levels should NOT be included in the non-radiative flux term (S) unless it crosses that 100 m boundary, yet Dessler’s formulation includes it (assuming he’s using 700 m). Furthermore, under that formulation, ENSO would have no effect on surface temperatures when it upwells warm water from the 200-700 m layer into the surface layers, so the S term does not even capture the major effects of ENSO, the primary component of non-radiative forcing over this period!
Anyhow, for my three-month standard deviations in fluxes, I get 1.85 W/m^2 for the 100 meter depth, slightly less than the 2.30 W/m^2 that Spencer calculates. This leads to ratios of between 3.2:1 and 3.8:1 for S/N, much less than Dessler’s number (20:1). However, Spencer mentions on his blog that he gets a ratio of ~2:1 for the 100 meter depth, which I am unable to reproduce.
If we use the 50 meter depth, which is on the lower end of mixed layer depth choices, I get a flux standard deviation of about 1.08 W/m^2, and THEN I get a ratio of about 2:1:
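For bookkeeping, here are those ratios computed from the standard deviations quoted above, assuming a single CERES sd(N) of about 0.52 W/m^2 (a back-of-envelope value I’ve backed out from the 50 m case, NOT a number quoted in either paper):

```python
# sd(S) values from the text, in W/m^2, keyed by assumed mixed layer depth
sd_S = {700: 5.14, 100: 1.85, 50: 1.08}

# Assumed CERES sd(N): backed out from the 50 m case, not from either paper
sd_N = 0.52

ratios = {depth: s / sd_N for depth, s in sd_S.items()}
for depth, ratio in ratios.items():
    print(f"{depth:>4} m: {ratio:.1f}:1")
```

A single sd(N) near 0.5 W/m^2 reproduces roughly the 10:1, ~3.6:1, and 2:1 figures all at once, which is reassuring about the internal consistency of the numbers.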
This matches up with the Lindzen and Choi 2011 paper, but is still a good deal larger than the SB11 estimate of 0.5:1. As I mentioned above, the standard deviation of S might be inflated by errors in the CERES and/or Levitus data, but I doubt it is to that degree. I’ll need to look into that later.
Anyhow, I went back to the Spencer-Braswell 2011 model and saw what effect it would have if we used the updated ratio calculations:
All of the lines, except the purple one, were generated using the ratios mentioned above for the 100 meter and 50 meter results. In each case, the feedback is underestimated at zero lag. Note, however, that none of the ratios reproduces the lagged signature we see in the observations, except the purple line, which requires a ratio of about 0.67:1. From what I can tell, a rather large combined error contribution from CERES + Levitus would be required for that ratio to hold.
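For anyone who wants to play with this, here is a minimal one-box sketch in the spirit of the SB11 model (my own simplification, not SB11’s exact setup or noise structure): white radiative (N) and non-radiative (S) forcing drive a mixed layer, and we regress the simulated TOA flux against temperature to see how the zero-lag feedback estimate depends on the S:N ratio.

```python
import numpy as np

def run_model(ratio, lam=3.0, depth_m=50.0, n=480, seed=1):
    """Zero-lag feedback estimate from a one-box mixed layer driven by
    white radiative forcing N and non-radiative forcing S, with
    sd(S)/sd(N) = ratio. Returns the regression estimate of lambda.
    (Simplified illustration, not SB11's exact model.)"""
    rng = np.random.default_rng(seed)
    sec_per_month = 2.63e6
    C = 1025.0 * 4186.0 * depth_m / sec_per_month  # W-month/m^2/K
    N = rng.standard_normal(n)           # radiative forcing, W/m^2
    S = ratio * rng.standard_normal(n)   # non-radiative forcing, W/m^2
    T = np.zeros(n)
    for t in range(1, n):
        # Temperature responds within the month to both forcings
        T[t] = T[t - 1] + (N[t] + S[t] - lam * T[t - 1]) / C
    toa = N - lam * T                    # what CERES would see
    slope = np.polyfit(T, toa, 1)[0]     # regress TOA flux on T
    return -slope                        # feedback estimate

lam_hat_radiative = run_model(0.0)       # N only: estimate far from lambda
lam_hat_mixed = run_model(10.0)          # S dominates: estimate near lambda=3
```

With ratio = 0 (all radiative forcing) the zero-lag regression badly misdiagnoses the prescribed lambda; as the S:N ratio grows, the estimate converges toward it, which is the basic SB11 point.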
So, in the interest of finding common ground between Dessler and Spencer, I would tentatively say the following (I reserve the right to change my mind if more information becomes available [i.e., I am shown to be wrong]):
- Part, but not most (25%-40%), of the surface temperature fluctuations in the last decade have come from unknown radiative forcings.
- This has led to underestimates of the overall climate feedback using CERES and surface temperatures, and has thus led to overestimates of climate sensitivity using this method.
- The difference in the lagged signatures between the observations and GCMs is more likely the result of improperly modeled ENSO variations than of unknown radiative forcings (pending further review of the rest of Dessler 2011).
For those of you questioning whether Dessler11 actually does incorrectly include data all the way down to the 700 meter layer in his Argo calculations, note the following from his paper:
This can be confirmed by looking at the Argo ocean heat content data covering 2003-2008. Using data reported in Douglass and Knox, the month-to-month change in monthly interannual heat content anomalies can be calculated (sd = 1.2 x 10^22 J/month).
The reference leads to this paper, which notes under section 3.1:
A new system was deployed in 2000 consisting in part of a broad-scale global array of temperature/salinity profiling floats, known as Argo. Monthly values of Argo H_O were determined from data to a depth of 750 m. Values from July 2003 to January 2008 are given by W08 and are listed in Table S-1.
Bold is mine (the 750 m depth is the relevant part). If you have paywall access, you can simply download the supplementary data for Table S-1, use the Argo columns, diff them (to get the change in OHC), and remove seasonal effects; you’ll get the 1.2 standard deviation that Dessler cites above. I would also note that Dessler11 doesn’t appear to actually attempt to calculate S(t), but instead simply tries to determine the standard deviations, as mentioned by Paul_K in a comment at the Air Vent.
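For completeness, here is (I believe) how Dessler’s 9 and 13 W/m^2 figures fall out of that 1.2 x 10^22 J/month number. The area values are standard round figures I’ve supplied myself, not taken from D11:

```python
# sd of monthly OHC changes from the Douglass & Knox Table S-1 diffs
sd_dOHC = 1.2e22                # J/month
sec_per_month = 2.63e6          # seconds in an average month
earth_area = 5.1e14             # m^2, global surface (round value)
ocean_area = 3.6e14             # m^2, ocean surface (round value)

per_earth = sd_dOHC / sec_per_month / earth_area   # ~9 W/m^2
per_ocean = sd_dOHC / sec_per_month / ocean_area   # ~13 W/m^2
print(per_earth, per_ocean)
```

Normalizing by global area gives ~9 W/m^2 and by ocean area ~13 W/m^2, matching both quoted numbers, which further suggests the 750 m Argo data really is the source of Dessler’s F_ocean figures.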