Troy's Scratchpad

March 8, 2012

Aerosols, Ocean Heat Uptake, and Climate Sensitivity

Filed under: Uncategorized — troyca @ 7:33 am

As those who have read the oft-cited Kiehl (2007) may know, the uncertainty surrounding the magnitude of the aerosol forcing means that models with very different climate sensitivities can match the temperature record over the 20th century.  In fact, the uncertainty in this forcing is one of the primary hindrances in determining climate sensitivity, because the larger the assumed magnitude of the aerosol forcing (that is, the more negative it is, offsetting the positive GHG forcing), the more sensitive a model must be in order to explain the observed warming.  It should be noted that the evolution of the temperature response can generally be determined as a function of the climate sensitivity, the forcing history (of which aerosols are the primary question mark for the last ~130 years), and the ocean heat uptake efficiency, so that without constraining the latter two variables it is impossible to constrain equilibrium climate sensitivity [from century-scale temperatures].  In this post, I will focus on the effect that different assumptions about the aerosol forcing have on an observationally-based estimate of equilibrium climate sensitivity.

In particular, I will use the same equation we’ve seen many times here (in Forster and Gregory 2006, for example), which is:

N = F – λT

where N is the net TOA radiative imbalance, F is the forcing, and T is the temperature perturbation.  λ is thus the strength of radiative restoration, from which we derive the ECS (2xCO2) as 3.7 (W/m^2) / λ.  See my “Radiation Budget and Climate Sensitivity” page on the right for more information.
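To make the arithmetic concrete, here is a minimal sketch of the rearrangement using made-up numbers (they are purely illustrative and not drawn from the data in this post); in practice λ is estimated by regressing (F – N) against T over the whole record, as sketched further below.

```python
# Rearranging N = F - lambda*T gives lambda = (F - N) / T; ECS then follows
# as 3.7 / lambda.  All numbers below are hypothetical.
F = 1.8   # forcing anomaly, W/m^2
N = 0.7   # net TOA imbalance, W/m^2
T = 0.6   # temperature anomaly, K

lam = (F - N) / T     # radiative restoration strength, W/m^2/K  (~1.83)
ECS = 3.7 / lam       # equilibrium 2xCO2 sensitivity, K  (~2.0)
print(f"lambda = {lam:.2f} W/m^2/K  ->  ECS = {ECS:.1f} K")
```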

As in my last post, I will derive the net TOA imbalance (N; although technically I only require the anomaly here, not the absolute imbalance) from the recently-posted 0–2000m Ocean Heat Content from NODC.  I do this by using overlapping 10-year regressions of dOHC/dt.
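Roughly, the calculation might look like the Python sketch below.  This is my own illustration rather than the linked script; the windowing details and unit constants are assumptions.

```python
import numpy as np

def toa_imbalance_from_ohc(years, ohc_joules, window=10):
    """Overlapping OLS regressions of 0-2000m OHC vs. time -> dOHC/dt in W/m^2.

    years      : annual time axis (calendar years)
    ohc_joules : 0-2000m ocean heat content anomaly, in Joules
    window     : regression window length in years (10 here)
    """
    SECONDS_PER_YEAR = 3.156e7
    EARTH_AREA_M2 = 5.1e14          # express the imbalance per m^2 of Earth's surface

    years = np.asarray(years, dtype=float)
    ohc = np.asarray(ohc_joules, dtype=float)

    centers, rates = [], []
    for i in range(len(years) - window + 1):
        yr, oh = years[i:i + window], ohc[i:i + window]
        slope_j_per_yr = np.polyfit(yr, oh, 1)[0]            # OLS slope, J/yr
        rates.append(slope_j_per_yr / SECONDS_PER_YEAR / EARTH_AREA_M2)
        centers.append(yr.mean())
    return np.array(centers), np.array(rates)
```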

For the F component, I use the GISS model forcings.  In order to see the impact of the anthropogenic aerosol forcing on estimated sensitivity, I separate the anthropogenic aerosol components (black carbon, direct effect, indirect effect) from the other forcings, then multiply them by an efficacy factor.  This keeps the evolution of the aerosol forcing the same, while simply varying the magnitude of the effect.
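Schematically, the scaling is just the following (a Python sketch of my own with placeholder names; the actual GISS forcing file layout may differ):

```python
import numpy as np

def scaled_total_forcing(other, black_carbon, direct, indirect, efficacy=1.0):
    """Total forcing with the anthropogenic aerosol terms scaled by an efficacy factor.

    other        : summed non-aerosol GISS forcings (W/m^2, one value per year)
    black_carbon, direct, indirect : anthropogenic aerosol components (W/m^2)
    efficacy     : scalar multiplier applied to the aerosol terms only, so the
                   time evolution of the aerosol forcing is unchanged while its
                   overall magnitude varies
    """
    aerosol = np.asarray(black_carbon) + np.asarray(direct) + np.asarray(indirect)
    return np.asarray(other) + efficacy * aerosol
```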

For the T component, I simply use GISTemp, although I used NCDC as well and didn’t see much of a difference.  Both the T and F components are smoothed over 10 years to match up with the corresponding TOA imbalance (N).  Here is what the time series of those three components looks like:

[Figure: 10-year averaged time series of net TOA imbalance (N), forcing (F), and temperature anomaly (T)]
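For reference, the overlapping decadal means of T and F that line up with the OHC regression windows could be computed with something like this sketch (again my own placeholder code, not the linked script):

```python
import numpy as np

def overlapping_decadal_means(years, series, window=10):
    """Overlapping 10-year means of an annual series (T or F), centred on the
    same windows used for the dOHC/dt regressions above."""
    centers = [np.mean(years[i:i + window]) for i in range(len(years) - window + 1)]
    means = [np.mean(series[i:i + window]) for i in range(len(years) - window + 1)]
    return np.array(centers), np.array(means)
```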

There are a few benefits to this approach, I think.  For one, we have a longer record than in FG06 or Murphy et al. (2009), so the results may be more indicative of the actual sensitivity than of simple ENSO variations.  Along those lines, by using the 10-year averages of temperature perturbation, forcing, and net TOA radiation, we are implicitly including some of the longer-term feedbacks (e.g. ice albedo) rather than just the instantaneous ones.

Of course, there are quite a few caveats as well.  The coverage at these depths prior to 2000 was pretty sparse, although we are using a coarser temporal resolution here.  Furthermore, although a large majority of the heat uptake on these timescales occurs in the 0–2000m ocean, there are other reservoirs as well – land, atmosphere, ice, and the abyssal ocean – which are not taken into account.  Finally, in this demo I am using overlapping ten-year intervals while still using simple OLS regression without accounting for auto-correlation, which may have an effect, although I would expect the bigger impact to be on the confidence intervals rather than on the likely estimates (which are what I’m showing here).

Anyhow, here is the resulting graph of diagnosed sensitivity versus the aerosol forcing (in 2010):

[Figure: diagnosed equilibrium sensitivity (10-year method) vs. anthropogenic aerosol forcing in 2010]

The dashed, colored lines represent a small sample of anthropogenic aerosol estimates since AR4.  The “R&C” paper is Ramanathan and Carmichael (2008); the full names were messing up my legend.  I could have included Quaas et al. (2009) in that bunch, which determined the short-wave aerosol effect to be –1.15 +/- 0.43 W/m^2; once a likely positive long-wave effect is included, the total presumably lands right around the Quaas et al. (2008) value or higher.  Q09 also notes that the median aerosol forcing used by the models examined is –1.5 W/m^2, and that those models seem to overestimate the strength of the aerosol indirect effect relative to satellite estimates.

Obviously, the assumption about the aerosol effect will affect the diagnosed sensitivities.  In this case, the choice among our sample of estimates does not have as much of an impact, because that range falls on the lower, flatter portion of the sensitivity curve.  I would suggest that the sensitivity diagnosed here is generally less than that of the GCMs, even for the same aerosol forcing, for the reason described in Hansen et al. (2011): the ocean heat uptake efficiency in models is too high.  I hope to look at that paper a bit in another post, but I will mention that the way H11 arrives at a similar or higher sensitivity than the models, while acknowledging that models overestimate the ocean heat mixing term, is to postulate an even larger (more negative) aerosol forcing.  The inference behind that forcing, I would suggest, is not nearly on par with the more empirical methods for determining the aerosol effect, but more on that in a later post.

Oddly, the total GISS anthropogenic aerosol forcing in 2010 is even more negative than the Hansen (2011) estimate.  Similarly, a higher-sensitivity model like GFDL CM2.1 appears to use an aerosol forcing that is < –2.0 W/m^2, according to Quaas et al. (2009).  Obviously, from the chart above, once you start moving left (toward larger aerosol efficacy), the diagnosed sensitivity can be strongly impacted.
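The shape of that curve can be reproduced schematically by sweeping the aerosol efficacy factor and re-diagnosing λ (and hence ECS) at each step.  The sketch below uses purely synthetic decadal-mean series just to show the mechanics; none of the numbers come from the actual data or from the linked script.

```python
import numpy as np

def diagnose_ecs(T10, N10, other10, aero10, efficacy):
    """Diagnosed ECS (K) for a given aerosol efficacy, from decadal-mean series."""
    F10 = other10 + efficacy * aero10
    lam = np.polyfit(T10, F10 - N10, 1)[0]   # radiative restoration, W/m^2/K
    return 3.7 / lam

# Purely synthetic decadal-mean anomalies, for illustration only.
T10     = np.array([0.15, 0.25, 0.35, 0.45, 0.55])       # temperature, K
N10     = np.array([0.20, 0.30, 0.40, 0.50, 0.60])       # TOA imbalance, W/m^2
other10 = np.array([1.20, 1.60, 2.00, 2.40, 2.80])       # non-aerosol forcing, W/m^2
aero10  = np.array([-0.30, -0.45, -0.60, -0.75, -0.90])  # aerosol forcing, W/m^2

for eff in (0.5, 1.0, 1.5):
    ecs = diagnose_ecs(T10, N10, other10, aero10, eff)
    print(f"efficacy {eff:.1f}: diagnosed ECS = {ecs:.1f} K")
```

In this toy example, scaling the aerosol forcing up (making it more negative) lowers the diagnosed λ and raises the ECS, which is the qualitative behavior shown in the chart above.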

Script is available here.  Note: I’ve found that you may need to navigate to the NODC OHC data manually (i.e., paste the link into your browser) before it will allow the script to grab the data automatically.

Update (3/24): After experimenting with some simulated noise in the 2000m OHC data, and thinking more about this, I believe there are some serious shortcomings in the analysis presented above.  For one, even using 10-year regressions on the 2000m data does not yield a good diagnosis of the average TOA imbalance over the period, given the huge uncertainty in the 2000m OHC data prior to the most recent decade (based on the NODC uncertainty intervals).  More importantly, even if it did, using 10-year averages does not distinguish between extreme and small responses to the several large volcanic eruptions: a more sensitive model might show a larger TOA imbalance from the GHG increase and a more negative response to the volcanoes, which averages out over 10 years to about what you’d get with a smaller response to both the positive GHG forcing and the negative volcanic forcing.  We would essentially need a longer period without volcanic eruptions, or OHC data good enough to diagnose the annual TOA imbalance, if we wanted to use this method to determine sensitivity.  Still, the curve of aerosol forcing vs. diagnosed sensitivity should give an example of the approximate relationship between the two.
