## October 11, 2013

### Relative importance of transient, effective, and equilibrium sensitivities

Filed under: Uncategorized — troyca @ 8:11 pm

### Introduction

I have heard it mentioned that the transient sensitivity (or transient climate response, TCR) is more relevant to climate policy than equilibrium climate sensitivity, and wanted to see the degree to which this is true.  AR4 notes that:

Since external forcing is likely to continue to increase through the coming century, TCR may be more relevant to determining near-term climate change than ECS.

Also, see this very interesting paper by Otto et al. (2013), ERL.  In particular, I wanted to see the relative importance of different types of sensitivity (transient, "effective", and equilibrium) when determining the temperature anomaly in the year 2100 relative to 1850.  For a recap of these types of sensitivities, let’s go back to our simple energy balance model:

C(dΔT/dt) = F – λΔT

Here, C represents the heat capacity of the system, ΔT represents the surface temperature change, F represents the forcing, and λ the strength of radiative restoration.  The TCR refers to the change in temperature in the 1% CO2 increase per year idealized scenario at the time when the concentration doubles, which occurs at 70 years (1.01^70 ≈ 2.0).  Given the large heat sink that is the ocean, there will still be a residual imbalance at the top of the atmosphere after these 70 years.  This means that TCR depends not only on λ, but also on the heat uptake of things like the ocean, which could be represented here in the C term (although note that C varies over time as heat penetrates deeper layers of the ocean, which is why a one-box model is not great for simulating both short and long-term transient responses).
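To make this concrete, here is a minimal one-box sketch in Python (the posts' actual scripts are in R) stepped through the idealized 1%/yr CO2 scenario.  The values of F_2x, λ, and C below are illustrative round numbers of my own, not fits from these posts:

```python
import math

# One-box energy balance model under the idealized 1%/yr CO2 scenario.
F_2x = 3.7    # W/m^2, forcing for doubled CO2 (assumed canonical value)
lam = 1.3     # W/m^2/K, radiative restoration strength (illustrative)
C = 8.0       # W*yr/m^2/K, effective heat capacity (~90 m mixed layer)

dt = 0.01     # time step, years
T = 0.0       # temperature anomaly, K
for i in range(int(70 / dt)):
    F = F_2x * math.log2(1.01 ** (i * dt))   # CO2 forcing at time i*dt
    T += dt * (F - lam * T) / C              # C(dT/dt) = F - lam*T

# T is now the TCR for this parameter choice; it sits well below the
# equilibrium response F_2x / lam because of the residual TOA imbalance.
print(round(T, 2))
```

For these parameters the year-70 temperature comes out noticeably lower than F_2x/λ ≈ 2.85 K, which is the point made above: the TCR depends on both λ and the heat uptake term C.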

For the equilibrium sensitivity, we also use the 2xCO2 scenario, but this time see how much the temperature changes after radiative equilibrium is reached at the top of the atmosphere.  Now, in the idealized situation where λ is unchanging, we can calculate the temperature at equilibrium independent of the heat capacity of the system, as simply ΔT = F / λ.  When we use a single value of λ to calculate ΔT in this way, the resulting ΔT is referred to as "effective sensitivity".  Here, we’ll define λ_250 as the radiative restoration strength from 1850-2100, and ΔT = F_2xCO2 / λ_250 will be our "effective sensitivity".  This differs from ECS in that λ could theoretically change over subsequent centuries, as in Armour et al., 2012.  However, defined this way, ECS is irrelevant for this century’s warming, and such a discussion is largely academic (which is why the distinction between ECS and "effective sensitivity" has been minimized in the past).
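As a quick numerical check of ΔT = F_2xCO2 / λ_250 (taking the common F_2xCO2 ≈ 3.7 W/m^2, a value I am assuming rather than one stated in the post, and a few hypothetical λ_250 values):

```python
F_2x = 3.7  # W/m^2 for doubled CO2 (assumed canonical value)
for lam_250 in (1.0, 1.23, 1.85):         # hypothetical lambda_250, W/m^2/K
    efs = F_2x / lam_250                  # effective sensitivity, K
    print(lam_250, round(efs, 2))
```

So a λ_250 of 1.85 W/m^2/K corresponds to an effective sensitivity of 2.0 K, and halving λ_250 doubles the effective sensitivity.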

So, in this way, I will technically be comparing the importance of TCR vs. effective sensitivity (EFS), although the latter is sometimes used synonymously with ECS.

### Method

Here, I will be employing the same two-layer model as in my last post.  Since EFS is determined by λ, it is easy to prescribe in this model.  However, the TCR depends not only on the EFS, but the heat transfer of the ocean as well.  To match the TCR to a target for a given EFS, I fix the "shallow" and "deep" layers at 90m and 1000m respectively, and use a brute force method (ugly, I know) to determine the rate of heat transfer between these layers that will result in this TCR.  From there, I have a model that simulates both the specified TCR and EFS, which I can then use to see the temperature rise from 1850 to 2100 using this model.
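A minimal Python sketch of this tuning step (the linked script is in R; the layer heat capacities, time step, and search grid here are my own illustrative choices, not values from the post):

```python
import math

def tcr_two_layer(lam, gamma, C_s=10.0, C_d=110.0, dt=0.05):
    """Temperature at year 70 of a 1%/yr CO2 run for a two-layer model.
    lam: radiative restoration (W/m^2/K); gamma: shallow-deep heat
    exchange coefficient (W/m^2/K).  C_s and C_d (W*yr/m^2/K) roughly
    correspond to ~90 m and ~1000 m layers, but are illustrative."""
    T_s = T_d = 0.0
    for i in range(int(70 / dt)):
        F = 3.7 * math.log2(1.01 ** (i * dt))
        dT_s = (F - lam * T_s - gamma * (T_s - T_d)) / C_s
        dT_d = gamma * (T_s - T_d) / C_d
        T_s += dt * dT_s
        T_d += dt * dT_d
    return T_s

def fit_gamma(lam, target_tcr):
    """Brute-force scan for the exchange rate that reproduces target_tcr."""
    best = min((abs(tcr_two_layer(lam, g) - target_tcr), g)
               for g in [0.05 * k for k in range(1, 60)])
    return best[1]

# Example: find the exchange rate giving TCR = 1.8 K when EFS ~ 3.0 K
gamma = fit_gamma(lam=1.23, target_tcr=1.8)   # EFS = 3.7/1.23 ~ 3.0 K
```

Once γ is fitted, the same two-layer model can simply be driven with the RCP adjusted forcings out to 2100.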

I have chosen TCRs of 1.1K, 1.4K, 1.8K, and 2.1K, and EFSs of 1.5K through 4.5K, based on what is feasible/possible for a given TCR.  Again, the adjusted forcings for the RCP scenarios come from Forster et al. (2013).

Script available here

### Results

This first graph is for the RCP6.0 scenario:

As you can see, the relatively flat lines (small slope) indicate that given a specific TCR, the EFS has only a small effect on the temperature that might be expected in 2100.  On the other hand, the large amount of space between the different TCR lines, despite only small changes in the magnitude of TCR, indicates that two models can have the same EFS but produce very different magnitudes of warming by 2100 if their TCRs differ.

To quantify a bit more, the slope of the TCR-2.1 line is 0.16K per EFS, meaning that if the earth had a transient response of 2.1K, the difference between an EFS of 3.0 and an EFS of 4.0 would only produce an extra 0.16K in 2100.  On the other hand, if we fixed the EFS at 3.0 and instead changed TCR, you can see that an increase between TCR of 1.4K and 2.1K (0.7K) produces a change of the same magnitude (~0.7K) in the expected temperature change by 2100, producing a ratio of 1.0 K per TCR.  A similar difference is found at 2.0 EFS when moving from 1.1K to 1.8K.  This would suggest that it is much more important to pin down the TCR (which has a large impact on this century’s warming), even if EFS remains uncertain, rather than trying to pin down EFS.  I should note, however, that for the low TCR scenario (1.1K), the slope of the line is ~0.34 K per EFS, which is double that at 2.1K.  This is because a low TCR suggests a lower heat capacity of the system, which in turn means a quicker pace to equilibrium, which means a greater importance of EFS.

For another look, here is the RCP4.5 scenario:

Here, the results are similar to the RCP6.0, although not quite as drastic.  For a given EFS (2.0 or 3.0), the change in 2100 warming is altered by about 0.75 K per TCR (rather than 1.0 K per TCR in the 6.0 scenario).  Part of this is because we only have ~3/4 of the forcing change (4.5 W/m^2 rather than 6.0 W/m^2), so obviously the warming in general by 2100 is decreased.  However, this alone does not explain why the slopes for the TCR-1.1 and TCR-2.1 lines have increased to 0.4 K / EFS and 0.2 K / EFS respectively.  For that, I suggest that the earlier stabilization of the 4.5 scenario forcing lends a slightly increased importance to the EFS, given the ~30 years it has to move towards that equilibrium prior to 2100.

### Conclusion

Clearly, TCR seems to be more important than EFS (and definitely more so than ECS) in determining the expected warming by 2100.  I tend to agree that in terms of policy, this suggests more focus should be placed on pinning down the TCR.

## September 2, 2013

### How much more warming would we get if the world stopped emissions right now? Dependence on sensitivity and aerosol forcing.

Filed under: Uncategorized — troyca @ 11:34 am

The question of how much more warming is "in the pipeline" or we are "committed to", if we stopped emissions immediately, is an interesting but not quite trivial one to answer.  Sure, there are GCMs that can run this scenario, but given their seeming inability to correctly reproduce the effective sensitivity relevant on these timescales (Masters, 2013) or the magnitude of the aerosol forcing (as appears to be the case in AR5 drafts), it would be nice to see how these factors influence the amount of warming we could expect under this scenario.

As part of a larger project, I have been developing some much simpler models that simulate the properties of the CMIP5 multi-model-mean under the various RCP scenarios up to the year 2100, but with the ability to enter specific values for parameters like effective sensitivity, current aerosol forcing, and emissions trajectories that have the largest impact on the warming.  I thought this scenario would be an interesting test for this portion of it (you might notice my R code in this case has the distinctive feel that it has been ported from a different language).

For the first part, we need to know how the temperature responds to forcings.  Obviously, one parameter we will take into account is "effective sensitivity", or rather its inverse in the form of radiative restoration strength – this is the value provided by the "user" or the model.  While the radiative restoration strength is not necessarily a constant, as discussed in previous posts, we want to take a value that is relevant on the century scale.  However, even if the user prescribes this value, we still need to know how fast the temperature will approach its equilibrium value for a given forcing evolution.  For that part, I attempted to "reverse engineer" the values for the CMIP5 multi-model mean (which is itself a hodge-podge of various runs from various models), by using 1.18 W/m^2/K for radiative restoration (3.1K Effective Sensitivity), as I found in my paper, and proceeded to fit the remaining parameters based on the adjusted forcings and temperature evolution of the 4 RCP scenarios as shown in Forster et al. (2013).  While a one-box model did not quite give a great fit across the various scenarios, using a 2-layer model as described in Geoffroy et al. (2013) seems to do the trick.

For the second part, we need to know the actual forcing evolution.  This too requires some reverse engineering and simplifying assumptions.  In this case, since we are stopping emissions immediately, those species with atmospheric lifetimes << 1 year can see their forcings drop to 0 immediately (aerosols, O3).  For other species, I assumed a constant "effective atmospheric lifetime" and atmospheric fraction, again fitting these values based on the RCP scenarios.  This yielded an effective lifetime of 12 years for CH4 and 185 years for CO2.  Since N2O and the main contributing CFC have similarly long lifespans, I simply lumped these smaller effects in with CO2 by slightly increasing the CO2 forcing for a given concentration (again, I don’t want my "simple" model to require the user to enter 10 different emissions trajectories!).
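A hedged Python sketch of that simplifying assumption: after emissions stop, each long-lived species' excess concentration (and, approximately, its forcing) decays with a fixed effective atmospheric lifetime.  The lifetimes are the fitted values from the post; the initial forcings are my own illustrative numbers:

```python
import math

lifetimes = {"CO2": 185.0, "CH4": 12.0}   # years, the post's fitted values
F0 = {"CO2": 1.9, "CH4": 0.5}             # assumed present-day forcings, W/m^2

def forcing(species, years_after_stop):
    """Forcing after emissions stop, treating the decay as exponential.
    (For CO2 the forcing is logarithmic in concentration, so this is
    only an approximation to decaying the concentration itself.)"""
    return F0[species] * math.exp(-years_after_stop / lifetimes[species])

for t in (0, 25, 50, 100):
    print(t, round(forcing("CO2", t), 2), round(forcing("CH4", t), 3))
```

This reproduces the qualitative picture in the figure: methane forcing is mostly gone within a few decades, while CO2 forcing declines only slowly.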

Here are the resulting forcing evolutions for the major greenhouse gases were emissions to stop today:

As you can see, due to the longer lifetime of CO2, the decrease in forcing is rather slow over time.  On the other hand, methane drops off rather quickly.  Contrary to the GHG forcing, we would likely see a large jump *up* in forcing from aerosols if we stopped emitting immediately…the magnitude of that jump up obviously depends on the magnitude of the current aerosol negative forcing.

Below are the temperature evolutions for different values of effective sensitivity and aerosol forcing.  For effective sensitivity, I will show the value of 1.8K (as found in my paper), as well as 3.1 K (as I found for the CMIP5 models).  For the aerosol forcing, I will use the values of -1.3 W/m^2 (AR4) and -0.9 W/m^2 (last draft of AR5).  Note that this additional warming chart is only taking into account anthropogenic forcings…it does not include the warming or cooling influences that may come from natural forcings in the future.

As you can see, all of them have a "bump" within the next 10 years, which represents the immediate increase in forcing due to the drop-off of aerosols.  After that, the temperatures begin to drop as the methane concentrations and CO2 concentrations begin to decrease.  However, the magnitudes are very different!  In these scenarios, choosing different values of sensitivity and aerosol forcing (from what I consider realistically possible values), can mean the difference between 0.6 K and 0.2K warming in the pipeline.  To further highlight this dependence, I will show the more "extreme" values based on the edges of uncertainty of sensitivity and aerosols:

Code and data.

## August 20, 2013

### Points where I don’t find Andrew Dessler’s >2C ECS video to be convincing

Filed under: Uncategorized — troyca @ 7:53 pm

I saw this video at David Appell’s blog, as well as Eli Rabbett’s.  For full disclosure, I think that a < 2C ECS is as likely as not, as my recent paper suggests.  Furthermore, I previously had a paper published noting the sensitivity of a previous cloud feedback paper by Dr. Dessler to dataset choice.

While I think the beginning of the video is a fair introduction to the concept of sensitivity and feedback, there basically seem to be 3 arguments that the video makes as to why sensitivity is unlikely to be < 2C, and I do not find them particularly convincing:

## 1) "From data, you can get that f is about 0.6"

Obviously, there is a lot of "data", and a lot of different methods to interpret this data, all of which give different estimates for ECS (and hence f). In this case, the reference is to Dessler 2013, J Climate, which is not only just one dataset/method combination, but also one that is ill-suited for the purpose of determining ECS.  Note the following huge caveat of the D13 paper:

Second, the differences in the feedbacks between the control model runs and the A1B model runs stress that one should be careful in applying conclusions about the feedbacks derived from internal variability to longer-term climate change. [my bold]

Essentially, this caveat is necessary to pass review because when the method employed by D13 (using ENSO-induced flux variations to derive sensitivity) is applied to the models, it yields a "thermal damping rate" of -0.6 W/m^2/K, which corresponds to a sensitivity of 6.2 K.  Compare that to the more relevant long-term sensitivity calculated from the A1B ensemble of 2.93 K (thermal damping of -1.26 W/m^2/K), and it’s clear that the D13 paper shows that its own method overestimated the sensitivity by a factor of more than 2.  Essentially, if the method fails to adequately diagnose the ECS of the models when it is applied to that data, we’re not likely to have confidence that the method can diagnose the real-world sensitivity.
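For reference, the sensitivity numbers above follow from ΔT = F_2x/λ, assuming the canonical F_2x ≈ 3.7 W/m^2 (my assumption, so small differences from the paper's quoted 2.93 K are expected):

```python
F_2x = 3.7   # W/m^2, assumed canonical forcing for doubled CO2

# Variability-derived thermal damping of 0.6 W/m^2/K implies ~6.2 K:
print(round(F_2x / 0.60, 1))
# A1B long-term damping of 1.26 W/m^2/K implies ~2.9 K:
print(round(F_2x / 1.26, 2))
```

The factor-of-two-plus gap between these two numbers is the validation failure being described.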

Several other papers have noted the shortcomings in applying the ENSO-induced inter-annual fluctuations to calculating climate-scale feedbacks, and it is a topic I have spent a good amount of time on this blog discussing.  For example, Colman and Hanson (2012):

A comparison is also made of model feedbacks with reanalysis derived feedbacks for seasonal and interannual timescales. No strong relationships between individual modelled feedbacks at different timescales are evident: i.e., strong feedbacks in models at variability timescales do not in general predict strong climate change feedback, with the possible exception of seasonal timescales.  [D13 uses interannual timescales].

This method is essentially the same approach used by Forster and Gregory (2006), which estimated an ECS of 1.6 K.  They obtained different values by using ERBE (rather than CERES) and a different time period, which should give an indication that the method is not particularly robust.  It is also quite sensitive to various methodological choices (using tropospheric rather than surface temps, using leads/lags), which can yield highly different results in sensitivity (as low as 0.6 K in the Lindzen and Choi, 2011 case), but none of them seem to capture the ECS when applied to models, nor do any offer compelling arguments why these choices should make such a huge difference if indeed they are estimates of ECS.  Lest you think I am picking only on "higher sensitivity" results for this method, I have noted similar shortcomings in the Lindzen and Choi (2011) analysis and more recently the Bjornbom discussion paper.

A while back, I submitted a paper using this type of approach (using satellite fluxes and temperatures to estimate ECS), and several reviewers pointed out that there is little evidence or validation suggesting that these short-term variations in globally-averaged quantities give an indication of longer-term sensitivity.  After doing a good amount more research into the topic, I was inclined to agree.  I would think a similar point was made by reviewers of the D13 paper, and hence the strong caveat mentioned above.  What likely happens is that ENSO produces localized warming and feedbacks at different times during the evolution of a single phase, which means that when it is averaged together in a global quantity it says very little about a long-term response.

Consider the hubbub that has been made about the difference between ECS and "effective sensitivity", the latter of which was calculated from 50-150 years of data, based on the evolution of spatial warming over time [Armour et al. (2012)].  If we are going to note the deficiency of a century of data for calculating ECS, it is difficult to see how the inter-annual response will reveal much about ECS.

## 2) "…getting a much lower climate sensitivity, say below 2 degrees Celsius, would require a strongly negative cloud feedback."

I disagree!  A strong negative cloud feedback is not required to end up with a sensitivity of < 2K.  Generally, even a neutral cloud feedback will do the trick.  For instance, if you look at Soden and Held (2006), 7 of the 12 models for which cloud feedback is presented would have a < 2K sensitivity in the absence of a cloud feedback.  If the Planck response is ~ -3.2 W/m^2/K, the combined Water Vapor + Lapse Rate (better constrained than each individually) is ~ 1.0 W/m^2/K, and the surface albedo feedback ~ 0.3 W/m^2/K, this corresponds to a thermal damping of -1.9 W/m^2/K with no cloud feedback, or a sensitivity of 1.95 K.  Basically, if one was 50-50 on whether cloud feedback was positive or negative, I would say a < 2K sensitivity is more likely than not (albeit barely).  Even a slightly negative cloud feedback (~ -0.3 W/m^2/K) would almost ensure an ECS < 2K.
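The arithmetic in that paragraph, spelled out (again assuming F_2x ≈ 3.7 W/m^2, which is my assumption rather than a number from the video):

```python
# Feedback terms for the zero-cloud-feedback case, W/m^2/K
planck = -3.2      # Planck response
wv_lapse = 1.0     # combined water vapor + lapse rate
albedo = 0.3       # surface albedo feedback
F_2x = 3.7         # W/m^2, assumed forcing for doubled CO2

damping = -(planck + wv_lapse + albedo)   # 1.9 W/m^2/K with no cloud feedback
print(round(F_2x / damping, 2))           # ~1.95 K sensitivity
```

Adding a slightly negative cloud feedback of -0.3 W/m^2/K raises the damping to 2.2 W/m^2/K, for a sensitivity of about 1.7 K, safely under 2 K.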

## 3) "…because we don’t know the forcing, I don’t look at the estimates of climate sensitivities from these studies [using 20th century observations] to be very meaningful."

The dismissal of essentially all recent estimates of ECS from investigation into the 20th-21st century climate based on uncertainty in forcing is highly dubious, in my opinion.  First of all, almost all of these studies explicitly take into account the uncertainties in forcing.  In fact, my paper shows that even in spite of the uncertainty in forcing, the method used gives a good indication of longer-term sensitivity as tested on the CMIP5 models.  These methods have the additional advantage of deriving climate-scale feedbacks, something lacking in the methodology of D13 (as admitted in the paper).  Moreover, updated estimates for aerosols suggest that previously these effects were over-estimated…so reworking my paper with smaller impacts would actually lower the estimate of ECS.

If we are dismissing methods based on uncertainty, there should be little made of paleo estimates, all of which have even greater uncertainty in the forcing, along with large uncertainties in the temperature changes.  The argument of "we have uncertainty, therefore we know nothing" is an oft-criticized argument (and I agree with the criticism), because everything has uncertainty.  But when this uncertainty is explicitly (and correctly) taken into account, there is no reason to discard the estimates.

## Conclusion

Essentially, even if one weighed all the estimates using inter-annual flux variations with temperature changes (among which D13 is one estimate), it would be difficult to say that < 2.0 K ECS is "unlikely".  If one took into consideration the 20th-21st century estimates as well, it is near impossible to call it "unlikely".  Obviously, science is not democratic and some papers are better than others…but it is only by specifically focusing on the D13 results (using a method that seems to fail validation tests) while discarding the 20th century energy balance observations (a method which "passes" similar validation tests) that the video claims a < 2K sensitivity is unlikely.

## July 13, 2013

### What does Balmaseda et al. 2013 say about the “missing heat”?

Filed under: Uncategorized — troyca @ 12:27 pm

There seems to be a good amount of confusion about what the "missing heat" refers to, as well as what implications the Balmaseda et al (2013) paper has for this missing heat.  So first, here is my quick summary:

In many ocean datasets, there seems to be a discrepancy between the rate of ocean heat uptake up to the mid-2000s and the uptake afterwards, with the more recent rate of uptake appearing to be smaller.  However, neither models nor satellites show a decrease in the TOA imbalance (as the increased forcing should only exacerbate this imbalance).  The "missing heat" means that theoretically, there should be more heat going into the ocean (that is, at the same rate as before).  The other possibility is that there was "extra heat" before – that is, the discrepancy in the ocean heat uptake is an illusion, and prior calculations were overestimating this heat uptake.  This latter assumption would imply that the GCMs are generally overestimating the TOA imbalance.

Now let’s look at this within the context of Balmaseda et al., (2013):

Indeed, you can see that even in the purple line, the slope in the early part of the 2000s is larger than that of the later part of the decade.  After digitizing this and calculating the TOA imbalance, I get 1.23 W/m^2 for the first part of the decade (2000-2004), and 0.38 W/m^2 for the last half (2005-2009).  That is a huge discrepancy.  So let’s see if we see something similar in the satellites (CERES SSF1 degree net TOA imbalance annual anomaly):

A drop of ~ 0.85 W/m^2 should be quite obvious in this graph, but there is no sign of that at all.  Rather, the average imbalance from 2005-2009 is about 0.17 W/m^2 *larger* than that from 2000-2004.  If we called the combined discrepancy of ~ 1 W/m^2 imbalance for 5 years the "missing heat", we are talking somewhere around 8 * 10^22 J.
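That ~8 * 10^22 J figure falls out of a quick unit conversion (Earth's surface area and seconds per year are standard values, not numbers from the post):

```python
EARTH_AREA = 5.1e14        # m^2, Earth's surface area
SECONDS_PER_YEAR = 3.156e7

def joules(imbalance_wm2, years):
    """Energy accumulated by a given global-mean TOA imbalance."""
    return imbalance_wm2 * EARTH_AREA * years * SECONDS_PER_YEAR

# ~1 W/m^2 sustained for 5 years, in units of 10^22 J:
print(round(joules(1.0, 5) / 1e22, 1))
```

So a 1 W/m^2 discrepancy maintained over half a decade corresponds to roughly 8 * 10^22 J of heat.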

Note that we are only using the satellites in this context in terms of *relative* TOA imbalance.  Some are under the misconception that we are able to measure the absolute TOA imbalance via satellite, and that using ocean heat content is just a secondary check (that is, we know that the extra heat is somewhere in the earth system from satellites, and just need to look harder to find it in the ocean).  This view is incorrect.  As explained in Stephens et al., 2012:

The combined uncertainty on the net TOA flux determined from CERES is ±4  Wm–2 (95% confidence) due largely to instrument calibration errors.  Thus the sum of current satellite-derived fluxes cannot determine the net TOA radiation imbalance with the accuracy needed to track such small imbalances associated with forced climate change

In other words, in terms of determining the absolute energy budget, ocean heat content is really the only game in town.  Satellites instead provide a check in terms of the evolution of this ocean heat content data, and have managed to raise a red flag in the ocean heat uptake slowdown.  In this much, I don’t see that Balmaseda et al. 2013 has really solved the case of the "missing heat", as we still see a large unexplained discrepancy in the rate of ocean heat uptake.  While surface winds and deep ocean warming may help explain a pause in surface warming, it does not explain this lower implied TOA imbalance.

Loeb et al. 2012 essentially “solved” this problem by noting the large uncertainties in the ocean heat datasets, which implied that the apparent discrepancy was likely an artifact of inaccurate ocean measurements.  Given the vastly larger coverage of ARGO from 2005 on, and the fact that these estimates range from implied TOA imbalances of ~ 0.38 W/m^2 to 0.6 W/m^2 (von Schuckmann and Le Traon, 2011; Stephens et al, 2012; Hansen et al, 2011, Masters 2013), and that we actually see a slight increase in TOA imbalance towards the later part of the CERES decade, I suspect that the huge rate of heating in the early part of the decade in Balmaseda et al. 2013 may be an artifact as well.  Such would imply that the B13 primary published ocean warming of 1.19 +/- 0.11 W/m^2 in the 2000s (0.84 W/m^2 globally) may be 1.5x to 3x too high.
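As a side note on the units, the quoted per-ocean-area value converts to the global-mean value by scaling with the ocean area fraction (the 0.71 here is my assumption; it is consistent with B13's pair of numbers):

```python
OCEAN_FRACTION = 0.71    # assumed fraction of Earth's surface that is ocean
per_ocean_area = 1.19    # W/m^2, B13's published 2000s ocean heat uptake

global_mean = per_ocean_area * OCEAN_FRACTION
print(round(global_mean, 2))   # ~0.84 W/m^2 as a global-mean flux
```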

In the next post, I hope to look at this last point in some more depth, and at "missing heat" in terms of TOA imbalance relative to various GCMs.

## May 21, 2013

### Another “reconstruction” of underlying temperatures from 1979-2012

Filed under: Uncategorized — troyca @ 7:56 am

Or, “could the multiple regression approach detect a recent pause in warming, part 4”.  For those following the series, you know that what I mean by “underlying temperatures” is the temperature evolution if we attempt to remove the influence of solar, volcanic, and ENSO variations.

It has been a while since I posted the first three parts of a series on whether using multiple linear regressions to remove the solar, volcanic, and ENSO effects from temperature was an accurate way to "reconstruct" the underlying trend. Generally, these did not perform too well, and tended to overestimate the solar influence and underestimate the volcanic influence, particularly if there was indeed a "slowdown" in the underlying temperature data.  One of the problems with that method is that it includes an assumption about the form of the underlying trend when doing the regressions.

So, I thought I’d put a temperature series (actually, a couple of options) out there that have been adjusted for these factors, using a method that is not particularly sensitive to the form of the underlying trend.  Essentially, I take the multi-model mean of the models I used in the last post in this series to adjust for the volcanic and solar components, and then remove ENSO based on a regression against that adjusted series.  Fortunately, the ENSO variations are high enough frequency that the regression is not particularly sensitive to the form of the underlying trend (whether it be linear or quadratic) as we have limited the number of variables.
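A toy Python sketch of that regression step on synthetic data (the post's actual code is R; the stand-in ENSO index, the true coefficient of 0.08, and the noise levels are all invented here to show why the high-frequency ENSO term is recoverable without modeling the trend):

```python
import random

random.seed(0)
n = 408                                   # monthly data, 1979-2012
enso = [random.gauss(0, 1) for _ in range(n)]          # stand-in ENSO index
trend = [0.015 / 12 * t for t in range(n)]             # slow underlying warming, K
temps_adj = [trend[t] + 0.08 * enso[t] + random.gauss(0, 0.05)
             for t in range(n)]
# temps_adj plays the role of temperatures already adjusted for the
# volcanic and solar components using the multi-model mean.

# Simple OLS of the adjusted series on the ENSO index.  Because the ENSO
# term is high-frequency and nearly uncorrelated with the slow trend, a
# univariate fit recovers the coefficient with little bias.
mean_e = sum(enso) / n
mean_t = sum(temps_adj) / n
beta = (sum((e - mean_e) * (y - mean_t) for e, y in zip(enso, temps_adj))
        / sum((e - mean_e) ** 2 for e in enso))
underlying = [y - beta * e for y, e in zip(temps_adj, enso)]
print(round(beta, 2))   # recovered ENSO coefficient, near the true 0.08
```

The residual series `underlying` is the "reconstruction": the trend plus whatever variability ENSO does not explain.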

It should be noted that this method might *over-adjust* for volcanic and solar if the CMIP5 models are too sensitive, which my recent paper (Masters 2013, Climate Dynamics) seems to indicate.  I have therefore included an adjusted series that adjusts by only 50% of the MMM as well.  Since the difference between the sensitivities in the transient state are likely to be less than after equilibration, let’s say the "true" adjustment should lie somewhere in-between those two adjustments.

Anyhow, here is the reconstructed series of NCDC (NOAA) temperatures.  (On a side note, I have become a little annoyed with trying to grab data from HadCRUT4 and GISS.  The former seems to return a "Not Found" error quite frequently, and the latter doesn’t let the R default user-agent grab data at all.  Hence the usage of NOAA temperatures).

If I were to go strictly by the eyeball test, the blue line (adjusted by 50% of MMM) seems to get it “most right” in terms of compensating for the volcanic eruptions without over-adjusting.  Below are the trends for the various start years ending in 2012 in these series:

Here you’ll note that the “adjusted” series actually results in a lower trend for all start years up until about 2001, when the influence of ENSO seems to really take over.  The blue line never dips below 0 for these adjusted trends of 10 years or longer, so one could argue that the underlying warming (if the blue line indeed captures this correctly) never really “stopped”.  On the other hand, the trends are substantially lower towards the end than they are at the beginning (and indeed smaller than in most model runs), so saying that the recent “slowdown” is simply the result of known natural factors rings a bit hollow to me.  It would be interesting to run a similar experiment on the CMIP5 model runs and see how much “natural” variation remains in those runs, or if this is something unique to the real world.

## May 16, 2013

### Our comment on Humlum et al. in press at Global and Planetary Change

Filed under: Uncategorized — troyca @ 7:58 am

It is available online at http://www.sciencedirect.com/science/article/pii/S0921818113000891 (actually, it has been for a while, but I haven’t had much time for blogging lately!).   For those of you who have already read this blog post, or the one at RealClimate by Rasmus, the contents should not be much of a shock.  Sadly, the article is pay-walled, but if you don’t have university access contact me and I can see about getting you a copy.  The code for the paper is available here.

Interestingly, I see that Mark Richardson of the University of Reading also has a comment on the Humlum et al. paper appearing online at Global and Planetary Change as well.

## April 16, 2013

### Sensitivity / CMIP5 comparison paper now in press at Climate Dynamics

Filed under: Uncategorized — troyca @ 8:24 pm

It is available online and titled “Observational Estimate of Climate Sensitivity from Changes in the Rate of Ocean Heat Uptake and Comparison to CMIP5 Models”.  Apparently Nic Lewis’s paper beat mine to online release by a day, and though my estimated confidence interval for equilibrium sensitivity is significantly wider, the median sensitivity in my paper also tends to be on the lower end relative to the IPCC AR4 likely value.  It is pay-walled, but please contact me if you need a copy and do not have University access.  Anyhow, a zip that includes all my code and data is available here.  From the abstract:

Climate sensitivity is estimated based on 0-2000m ocean heat content (OHC) and surface temperature observations from the second half of the 20th century and first decade of the 21st century, using a simple energy balance model and the change in the rate of ocean heat uptake to determine the radiative restoration strength over this time period.  The relationship between this 30-50 year radiative restoration strength and longer term effective sensitivity is investigated using an ensemble of 32 model configurations from the Coupled Model Intercomparison Project phase 5 (CMIP5), suggesting a strong correlation between the two.  The mean radiative restoration strength over this period for the CMIP5 members examined is 1.16 Wm-2K-1,  compared to 2.05 Wm-2K-1 from the observations.  This suggests that temperature in these CMIP5 models may be too sensitive to perturbations in radiative forcing, although this depends on the actual magnitude of the anthropogenic aerosol forcing in the modern period.  The potential change in the radiative restoration strength over longer timescales is also considered, resulting in a likely (67%) range of 1.5 K to 2.9 K for equilibrium climate sensitivity, and a 90% confidence interval of 1.2 K to 5.1 K.

To explain further on what I consider to be three of the more important conclusions of the paper:

First, there seems to be a relationship between the estimates of effective sensitivity from the last 30-50 years and the longer-term multi-century effective sensitivity (which is arguably more important than equilibrium sensitivity) as they are calculated in models.  To me, this gives hope that as the length of satellite record increases, we might start to narrow down a more accurate value for sensitivity that is relevant to the timescales of greatest interest.

Second, most of the CMIP5 models seem (albeit not without caveats) to show too high a sensitivity over this period.  From figure 3 of the paper:

This is showing the radiative restoration strength in the CMIP5 models examined (each X is a different run from that model), which is generally inversely related to sensitivity.  The solid gray line represents the likely value from observations, and the dashed lines represent +/- one standard deviation.  As can be seen, the vast majority of these runs fall below the observational likely value for radiative restoration strength, suggesting these CMIP5 models likely have too high a sensitivity relative to the observations.  Interestingly, inmcm4 and MRI-CGCM3 are both well above the line, and while they are among the CMIP5 models with the lowest sensitivity, they are not nearly as insensitive as the 50-yr radiative restoration strength would make them appear (which would be ~ 1.2 K for ECS if we performed a naïve calculation).  Obviously, the relationship between this radiative restoration strength and ECS can be complicated, as discussed previously at this blog and within the paper.

Finally, there is the estimate of ECS, for which I have tried to account for the potential change from effective sensitivity to ECS based on the CMIP3 relationships, although again I would argue that effective sensitivity is generally of more interest (even though it is not the standard benchmark at this point).  Nonetheless, from figure 5 of the paper:

The gray indicates the pdf of “ECS” if we keep the radiative restoration strength fixed after the 50-year observational period, whereas the black line indicates the pdf for ECS if we take the uncertainty in the T_eff/T_eq ratio into account based on this relationship in CMIP3 models.  The latter is reported in the upper right box and in the abstract.  The orange and purple lines represent the “likely” values for sensitivity when switching in the JAMSTEC or CSIRO OHC data rather than using that from NOAA.

Clearly, the median estimate for ECS of 1.98 K seems to match some other observationally-based estimates with a lower sensitivity, and the “likely” (67%) range of 1.5 K to 2.9 K is on the lower end as well.  Unfortunately, due to the large uncertainties in 0-2000m OHC data earlier in the record, this method continues to yield large uncertainties at the extremes, which, due to the inverse relationship between sensitivity and the radiative restoration strength, tends to increase the higher end of the range much more than the lower end.  Hence the 90% interval of 1.2 K to 5.1 K is not a particularly strong constraint.
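The asymmetry comes straight from the reciprocal: a roughly symmetric uncertainty in λ produces a long upper tail in S = F_2xCO2/λ.  A toy Monte Carlo (with an assumed λ distribution, not the paper's actual uncertainty analysis) illustrates this:

```python
# Toy Monte Carlo: symmetric uncertainty in lambda -> skewed sensitivity.
import random

random.seed(0)
F_2XCO2 = 3.7
# Assumed lambda distribution: mean from the observational estimate,
# spread illustrative; clipped at 0.5 to keep samples physical.
samples = sorted(F_2XCO2 / max(random.gauss(2.05, 0.5), 0.5)
                 for _ in range(100_000))
p05, med, p95 = (samples[int(q * len(samples))] for q in (0.05, 0.5, 0.95))
# The upper tail stretches further from the median than the lower tail:
print(p95 - med > med - p05)  # -> True
```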

## February 23, 2013

### Could the multiple regression approach detect a recent pause in global warming? Part 3.

Filed under: Uncategorized — troyca @ 11:29 am

In the first two parts of this series, I demonstrated how multiple regression methods that assume an underlying linear "signal" are unable to properly reconstruct a pause in surface temperature warming when attempting to remove the volcanic, solar, and ENSO components from my simple energy balance model.  That is, for an approach similar to Foster and Rahmstorf (2011), the method will tend to underestimate the warming influence of volcanic recovery and overestimate the cooling influence of solar activity over recent decades to compensate for the pause.  With the improvement Kevin C mentioned, there is some ability to detect a longer tail for the volcanic recovery (indeed, it does so nearly perfectly if the underlying signal is actually linear), and the solar influence is no longer over-estimated.  Unfortunately, it still underestimates the recent warming influence from volcanic recovery in my energy balance model in the "flattening" scenario 2.

I had thus wondered whether this long-tailed volcanic recovery was merely an artifact of my simple model, or indeed may have contributed substantial warming from 1996 (when the Pinatubo stratospheric aerosols were virtually gone) onward.  There are not that many models that have contributed volcanic-only experiments to CMIP5 (I showed one in part 1, and Gavin showed an ensemble for GISS-EH at RealClimate in response to this discussion).  However, there is plenty of data from the natural-forcing only historical experiment, which, by averaging several of the runs for a particular model, can give us a good idea of the forced volcanic + solar influence in those GCMs.

In the figure below, I have shown the mean of the historicalNat runs for 7 individual CMIP5 models that have 4 or more of these experiment runs.  As such, this should give an idea of the forced response in these models without much additional unforced variation.  I have also plotted on the same chart the volcanic + solar influence as diagnosed by the FR11 and Kevin C methods when using the HadCRUTv4 dataset.

As can be seen, the volcanic response in all of these AOGCMs is far larger and has a longer tail than diagnosed by the multiple regression methods.  Now, it is certainly possible that these volcanic responses in AOGCMs are too large, as there is evidence to suggest that the CMIP5 runs don’t properly simulate this response.  However, the fact that the FR method shows a far lower sensitivity to volcanoes, while simultaneously showing a much larger sensitivity to solar influences than either GCMs or simple energy balance models would indicate, suggests that it may be compensating for the recent flattening.  Indeed, it is quite difficult to conceive of a realistic, physics-based model that does not indicate a substantial volcanic-recovery-induced warming contribution after 1996, despite it being virtually non-existent in the FR11 diagnosis (the increase around 1998 in the FR line is actually solar-induced).

The table below highlights the warming contribution of the model ensembles (in K/Century, so be careful!) from the indicated start year through 2012 (I have an * by CCSM4 because the runs end in 2005).

For comparison, the HadCRUTv4 trends over these same periods are

1979-2012: 1.55 K/Century
1996-2012: 0.91 K/Century
2000-2012: 0.38 K/Century
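These are just ordinary least-squares slopes scaled to K/century.  A minimal sketch of the computation (the anomaly series below is fabricated; the real input would be HadCRUTv4 annual anomalies):

```python
# OLS trend in K/century from paired year / anomaly lists.
def trend_k_per_century(years, anoms):
    n = len(years)
    my, ma = sum(years) / n, sum(anoms) / n
    slope = (sum((y - my) * (a - ma) for y, a in zip(years, anoms))
             / sum((y - my) ** 2 for y in years))  # K per year
    return slope * 100.0  # K per century

years = list(range(1996, 2013))
anoms = [0.005 * (y - 1996) for y in years]  # fake series, 0.5 K/century
print(round(trend_k_per_century(years, anoms), 2))  # -> 0.5
```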

If one believes that this range of GCMs represents the true forced response of solar+volcanic, it would suggest that these natural forcings were responsible for 15% to 51% of the warming trend from 1979-2012.  If I had to bet, I would probably put it on the lower end, as the AOGCMs appear to be a bit too sensitive to these radiative perturbations and suggest too much ocean heat uptake, which probably creates longer tails on the early volcanic eruptions than is warranted.  However, I do think the contribution is probably greater than 0%, which is about what the FR method puts it at.
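As a back-of-envelope check on that range: the natural-forcing trend endpoints below (0.23 and 0.79 K/century) are my assumption, back-computed from the stated percentages and the 1979-2012 HadCRUTv4 trend rather than taken from the table:

```python
# Natural (solar+volcanic) share of the observed 1979-2012 trend.
total = 1.55  # K/century, HadCRUTv4 trend over 1979-2012
fractions = [round(100 * nat / total) for nat in (0.23, 0.79)]
print(fractions)  # -> [15, 51] percent
```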

The periods from 1996 to present and from 2000 to present, however, are where I think we see the larger misdiagnosis.  Whereas all models (including my simple energy balance model) indicate that the solar+volcanic influence from 1996 to present was positive, and comparable in magnitude (median: 0.81 K/century, mean: 1.05 K/century) to the actual HadCRUT trend, both regression methods suggest either a slightly negative or near-zero influence from these components.  And from 2000 to present, while the models are more split (with only 6 of the 7 suggesting a positive influence, and the range varying more widely), it is difficult to believe that the actual influence of solar+volcanic is as strongly negative as the FR method indicates.  This is why it looks to me like the multiple regression method underplays the influence of volcanic recovery in order to partly compensate for a recent pause.

Essentially, we are left wondering if the GCMs are too sensitive to volcanic eruptions, and/or if the multiple regression method is underestimating their influence to compensate for a recent pause.  Again, if I had to bet, it would probably be in the middle – the GCM response is generally a bit too large, but the response is not nearly as small (or short) as the FR11 method would indicate.

## February 20, 2013

### Could the multiple regression approach detect a recent pause in global warming? Part 2.

Filed under: Uncategorized — troyca @ 8:36 pm

Previously, I posted on the multiple regression method – in particular, the method employed in Foster and Rahmstorf (2011) – and how, when attempting to decompose the temperature evolution of my simple energy balance model into the various components (signal, ENSO, solar, and volcanic), this method encountered two large issues:

1) It did not adequately identify the longer term effect of the volcanic recovery on temperature trends, and

2) It largely overestimated the solar influence.

If you recall, I tested two scenarios in that original post.  The first scenario was a linearly increasing underlying signal.  The second scenario was a combination of a linearly increasing signal and an underlying low-frequency oscillation, resulting in a flattening of recent temperatures (one that was not caused by the combination of ENSO, volcanic, and solar influences).  The goal was to see whether this multiple regression method could identify the flattening if it existed.
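As a rough illustration of the kind of regression being tested (a simplification, not FR11's exact implementation, which also fits lags for each predictor), here is a multiple regression of a synthetic temperature series on a linear trend plus ENSO, volcanic, and solar predictors:

```python
# Multiple-regression decomposition sketch: recover the trend, ENSO,
# volcanic, and solar coefficients from a synthetic temperature series.
import numpy as np

rng = np.random.default_rng(0)
n = 400  # months
t = np.arange(n)
enso = rng.standard_normal(n)             # stand-in for an ENSO index
volc = np.exp(-((t - 100) % 200) / 30.0)  # fake eruption pulses
solar = np.sin(2 * np.pi * t / 132.0)     # fake ~11-yr cycle
temp = 0.0015 * t + 0.1 * enso - 0.3 * volc + 0.05 * solar

X = np.column_stack([np.ones(n), t, enso, volc, solar])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
# With no noise, the fit recovers [0.0015, 0.1, -0.3, 0.05] exactly.
print(np.round(coef[1:], 4))
```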

Thanks to Kevin C, who suggested and implemented a few improvements to this F&R method, noting them in the comments of that post: “…tie the volcanoes and solar together as forcings and fit a single exponential response term instead of a delay.”  This would allow a tail for the recovery from volcanic eruptions well beyond the removal of the actual stratospheric aerosols, and would not allow an over-fitting of the solar influence.  After implementing this newer method, I would say that it is a large improvement (at least in diagnosing my simple EBM components) in the first scenario of a linearly increasing trend:

Unfortunately, due to the underlying assumption implicit in this method of a linear trend, it still has trouble identifying the recent pause present in scenario 2:

To see where exactly it is going wrong in scenario 2 vs. scenario 1, we can again look at the individual components:

As should be clear, the improvements suggested by Kevin C generally improve performance across the board.  Unfortunately, in the 2nd scenario with the flattening, the multiple regression method still tries to compensate for the flattening by decreasing the diagnosed influence of volcanic recovery, therefore leading to a misdiagnosis.
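Kevin C's single-exponential-response idea can be sketched as follows: convolve the combined forcing (here volcanic-only, for brevity) with a normalized exp(−t/τ) kernel, fit a scale factor by least squares, and scan τ for the lowest residual.  All series and parameters below are illustrative:

```python
# Fit a single exponential response timescale tau by scanning residuals.
import numpy as np

def exp_response(forcing, tau):
    """Convolve a forcing series with a normalized exponential decay."""
    t = np.arange(len(forcing))
    kernel = np.exp(-t / tau)
    kernel /= kernel.sum()
    return np.convolve(forcing, kernel)[: len(forcing)]

n = 400                    # months
t = np.arange(n)
forcing = np.zeros(n)
forcing[50] = -3.0         # fake eruption: one sharp negative spike
temp = 0.002 * t + 0.5 * exp_response(forcing, 36.0)  # true tau = 36

# Scan candidate timescales; keep the one with the lowest residual.
best = min(
    (np.linalg.lstsq(
        np.column_stack([np.ones(n), t, exp_response(forcing, tau)]),
        temp, rcond=None)[1][0], tau)
    for tau in range(6, 120, 6))
print(best[1])  # -> 36, the timescale used to build temp
```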

Dikran Marsupial noted in the comments of that last post that “there are no free lunches.”  Perhaps this helps drive the point home that assuming an underlying linear trend will lead to this misdiagnosis if the increase is not linear.  I hope to investigate further the actual influence of solar + volcanic activity on recent temperatures using some GCM runs.

## February 13, 2013

### Our paper on UHI in USHCN is now published

Filed under: Uncategorized — troyca @ 4:56 pm

As you know, my first interest and the bulk of the early articles for this blog dealt with the question of the urban heat island (UHI) influence on U.S. historical temperatures.  Our paper is now available (pre-print version) on this topic, and Zeke (the lead author of the paper and the one who wrangled everyone together!) and Matthew Menne put together a good post on it over at RealClimate.

Apart from the use of several different proxies for urbanization, and the thorough treatment of many UHI-related topics, I personally think an interesting aspect of this paper is how it delves into the potential issue of “urban bleeding” during homogenization.  For those that have followed various discussions on the topic over the past few years, or have read this paper already, it is clear that the UHI signal appears much more strongly in the TOB data than in the F52 homogenized data.  A while back I also had a post, using synthetic data, that showed how the F52 algorithm could potentially alias some of the heat from urban stations into rural stations, thereby removing the appearance of UHI without removing the UHI itself.

On the one hand, if you look at figure 9 in the paper, I think it confirms the concern that the homogenization process could potentially spread urban warming to rural stations, as seen in the urban only adjustments.  On the other hand, I also think it shows that in the case of USHCN v2, this effect is pretty minor based on using only ISA < 10% for adjustments.  Now, one might wonder about UHI spreading from stations with ISA < 10% (that is, whether these “rural” stations are not strictly “rural”, and are themselves contaminated by the UHI).  Thus, I thought it might be interesting to show another couple of figures here, which show the difference in the “urban” vs. “rural” trends based on what cut-off in the ISA classification is used to define “rural”:

As you can see, the bulk of the UHI signal in TOB comes from those stations with ISA > 10%, such that the use of 10% seems a pretty solid cut-off for “rural”.
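The cutoff comparison itself is simple bookkeeping; a toy sketch with fabricated station values (ISA percent, TOB trend) shows the machinery:

```python
# Urban-minus-rural mean trend as a function of the ISA cutoff.
stations = [
    (0.5, 1.4), (2.0, 1.5), (8.0, 1.6), (12.0, 2.1), (40.0, 2.4),
]  # (ISA %, trend in K/century) -- made-up values

def urban_minus_rural(stations, cutoff):
    rural = [tr for isa, tr in stations if isa < cutoff]
    urban = [tr for isa, tr in stations if isa >= cutoff]
    return sum(urban) / len(urban) - sum(rural) / len(rural)

for cutoff in (1.0, 10.0):
    print(cutoff, round(urban_minus_rural(stations, cutoff), 2))
# With these toy values, the urban-rural gap is larger at the 10% cutoff.
```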

Nevertheless, for an additional demonstration, we can use only the most rural stations (< 1% ISA) from a dataset that has only been adjusted by other most rural stations (< 1% ISA).  Here is that final result when compared against the gridded F52 all-adjusted, as well as GISS:

From a visual perspective, it seems fairly clear to me that there is not much difference, and the numerical results below seem to bear this out for the most part.  The exception is one we discussed in the paper, where the USHCN v2 all-adjusted series may have some residual UHI in the early part of the record and require an additional adjustment (like the one used in GISS).

1960-2010 Trends

1885-2012 Trends