Troy's Scratchpad

September 2, 2013

How much more warming would we get if the world stopped emissions right now? Dependence on sensitivity and aerosol forcing.

Filed under: Uncategorized — troyca @ 11:34 am

The question of how much more warming is "in the pipeline", or what we are "committed to" if we stopped emissions immediately, is an interesting but not entirely trivial one to answer.  Sure, there are GCMs that can run this scenario, but given their seeming inability to correctly reproduce the effective sensitivity relevant on these timescales (Masters, 2013) or the magnitude of the aerosol forcing (as appears to be the case in AR5 drafts), it would be nice to see how these factors influence the amount of warming we could expect under this scenario.

As part of a larger project, I have been developing some much simpler models that simulate the properties of the CMIP5 multi-model mean under the various RCP scenarios up to the year 2100, but with the ability to enter specific values for the parameters that have the largest impact on the warming, such as effective sensitivity, current aerosol forcing, and emissions trajectories.  I thought this scenario would be an interesting test for this portion of it (you might notice my R code in this case has the distinctive feel of having been ported from a different language).

For the first part, we need to know how the temperature responds to forcings.  Obviously, one parameter we will take into account is "effective sensitivity", or rather its inverse in the form of the radiative restoration strength; this is the value provided by the "user" of the model.  While the radiative restoration strength is not necessarily a constant, as discussed in previous posts, we want to take a value that is relevant on the century scale.  However, even if the user prescribes this value, we still need to know how fast the temperature will approach its equilibrium response for a given forcing evolution.  For that part, I attempted to "reverse engineer" the values for the CMIP5 multi-model mean (which is itself a hodge-podge of various runs from various models): using 1.18 W/m^2/K for radiative restoration (3.1 K effective sensitivity), as I found in my paper, I proceeded to fit the remaining parameters based on the adjusted forcings and temperature evolutions of the 4 RCP scenarios as shown in Forster et al. (2013).  While a one-box model did not give a great fit across the various scenarios, a two-layer model as described in Geoffroy et al. (2013) seems to do the trick.
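
To make that concrete, the Geoffroy et al. (2013) scheme is just two coupled heat reservoirs: C1 dT1/dt = F − λT1 − γ(T1 − T2) for the upper ocean, and C2 dT2/dt = γ(T1 − T2) for the deep ocean.  Below is a minimal R sketch of such a model.  The parameter names match those in the linked code (c1Depth, c2Depth, layerTransfer), and the defaults are the fitted values discussed in the comments below, but the forward-Euler discretization itself is an illustrative reconstruction rather than the exact implementation:

```r
# Minimal sketch of the Geoffroy et al. (2013) two-layer model:
#   C1 * dT1/dt = F - lambda*T1 - gamma*(T1 - T2)   (upper layer)
#   C2 * dT2/dt = gamma*(T1 - T2)                   (deep layer)
# Layer heat capacities are expressed as equivalent ocean depths in meters.
runTwoBox <- function(forcing, lambda = 1.18, c1Depth = 95, c2Depth = 500,
                      layerTransfer = 0.4) {
  secPerYr <- 3.156e7                      # seconds per year (annual step)
  cp       <- 4.18e6                       # heat capacity of water, J/m^3/K
  C1 <- cp * c1Depth                       # upper-layer capacity, J/m^2/K
  C2 <- cp * c2Depth                       # deep-layer capacity, J/m^2/K
  T1 <- T2 <- numeric(length(forcing))     # temperature anomalies, K
  for (i in seq_along(forcing)[-1]) {
    flux  <- layerTransfer * (T1[i-1] - T2[i-1])  # heat flow into deep layer
    T1[i] <- T1[i-1] + (forcing[i-1] - lambda * T1[i-1] - flux) * secPerYr / C1
    T2[i] <- T2[i-1] + flux * secPerYr / C2
  }
  T1                                       # surface temperature anomaly
}
```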

For the second part, we need to know the actual forcing evolution.  This too requires some reverse engineering and simplifying assumptions.  In this case, since we are stopping emissions immediately, those species with atmospheric lifetimes << 1 year see their forcings drop to 0 immediately (aerosols, O3).  For other species, I assumed a constant "effective atmospheric lifetime" and atmospheric fraction, again fitting these values based on the RCP scenarios.  This yielded an effective lifetime of 12 years for CH4 and 185 years for CO2.  Since N2O and the main contributing CFC have similarly long lifetimes, I simply lumped these smaller effects in with CO2 by slightly increasing the CO2 forcing for a given concentration (again, I don’t want my "simple" model to require the user to enter 10 different emissions trajectories!).
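
Under that constant-lifetime assumption, the excess concentration above preindustrial simply decays exponentially once emissions stop, and the CO2 forcing follows from the usual logarithmic expression.  Here is a minimal sketch; the decay-to-preindustrial form and the ~3.71 W/m^2 forcing per doubling are standard simplifications I am assuming here, not necessarily what the linked code does, and CH4 would need its own forcing expression:

```r
# Sketch: after emissions stop, the excess concentration above the
# preindustrial level decays with a constant effective lifetime tau,
# and the forcing follows the standard logarithmic CO2 expression.
decayForcing <- function(C0, Cpre, tau, nYears, F2x = 3.71) {
  t    <- 0:nYears
  conc <- Cpre + (C0 - Cpre) * exp(-t / tau)  # ppm, relaxing toward Cpre
  (F2x / log(2)) * log(conc / Cpre)           # W/m^2 vs. preindustrial
}

# e.g., CO2 from ~395 ppm today with the fitted 185-year effective lifetime:
co2F <- decayForcing(C0 = 395, Cpre = 278, tau = 185, nYears = 300)
```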

Here are the resulting forcing evolutions for the major greenhouse gases were emissions to stop today:

[Figure: CH4 forcing evolution if emissions stop today]

[Figure: CO2 forcing evolution if emissions stop today]

As you can see, due to the longer lifetime of CO2, the decrease in its forcing is rather slow over time, while the methane forcing drops off quickly.  In contrast to the GHG forcings, we would likely see a large jump *up* in forcing from aerosols if we stopped emitting immediately…the magnitude of that jump obviously depends on the magnitude of the current negative aerosol forcing.

Below are the temperature evolutions for different values of effective sensitivity and aerosol forcing.  For effective sensitivity, I will show the value of 1.8 K (as found in my paper), as well as 3.1 K (as I found for the CMIP5 models).  For the aerosol forcing, I will use the values of -1.3 W/m^2 (AR4) and -0.9 W/m^2 (last draft of AR5).  Note that this additional-warming chart only takes anthropogenic forcings into account…it does not include the warming or cooling influences that may come from natural forcings in the future.
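
To show how the pieces fit together, here is a hypothetical driver built from the sketches above (the names and shortcuts are illustrative, not the linked code).  It assumes a vector histF holding the historical total anthropogenic forcing, e.g. digitized from Forster et al. (2013), so that the ocean layers start out of equilibrium, and it takes the crude shortcut of decaying the GHG forcing itself with the 185-year effective lifetime rather than decaying concentrations:

```r
# Hypothetical driver: spin the two-layer model up over historical forcing
# (histF, assumed available), then stop emissions: aerosol forcing vanishes
# immediately while the remaining GHG forcing decays away slowly.
lambda   <- 2.08                # 1.8 K effective sensitivity (1.18 for 3.1 K)
aerosolF <- -0.9                # present-day aerosol forcing, W/m^2
nFut     <- 200                 # years to run past the cutoff

ghgNow  <- tail(histF, 1) - aerosolF       # today's GHG-only forcing
futureF <- ghgNow * exp(-(1:nFut) / 185)   # crude forcing-level decay

temps     <- runTwoBox(c(histF, futureF), lambda = lambda)
committed <- max(temps) - temps[length(histF)]  # peak warming beyond today
```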

[Figure: additional warming for the four sensitivity/aerosol-forcing combinations]

As you can see, all of them have a "bump" within the next 10 years, which represents the immediate increase in forcing due to the drop-off of aerosols.  After that, the temperatures begin to drop as the methane and CO2 concentrations decrease.  However, the magnitudes are very different!  In these scenarios, choosing different values of sensitivity and aerosol forcing (from what I consider realistically possible values) can mean the difference between 0.6 K and 0.2 K of warming in the pipeline.  To further highlight this dependence, I will show the more "extreme" values based on the edges of the uncertainty ranges for sensitivity and aerosols:

[Figure: additional warming for more extreme sensitivity and aerosol-forcing values]

Code and data.

11 Comments

  1. Since I formed the impression that you may actually be for real, I have several times attempted to use your site as my can opener into the sensitivity issue. But my lamentably short attention span has heretofore prevented me from overcoming the momentary setbacks caused, for instance, by your early posts’ broken links to on-line data. So I have seized upon this bite-sized post as one that I may actually be able to manage, and I hope you will forgive me for jumping in the middle with a rookie question.

    I have reviewed the code and satisfied myself that, as you no doubt already knew, the poor short-term performance that afflicts its quick and dirty implementation of a “two box” model does not detract significantly from its results for the particular problem you’ve assigned it. But, despite the runTwoBox() comments’ reference to making its output suitable for regression, and despite your statement above that you “proceeded to fit the remaining parameters based on the adjusted forcings and temperature evolution of the 4 RCP scenarios,” the layer-depth and inter-layer conductivity parameters c1Depth, c2Depth, and layerTransfer seem to have dropped into the code like manna from heaven; it was apparently not in this code that you performed the above-mentioned regression.

    Presumably, that is, you had already regressed your two-box model’s temperature outputs for various RCP forcings and combinations of c1Depth, c2Depth, and layerTransfer against “temperature evolutions” associated with respective RCP scenarios. And on the web I have been able to find things called RCP scenarios and was able at this location: http://tntcat.iiasa.ac.at:8787/RcpDb/dsd?Action=htmlpage&page=download to locate their concentration and forcing data. But I assume that the “temperature evolution” to which you refer is a set of responses of somebody’s models to respective forcing sets that those data represent, and I haven’t found those responses.

    I fully understand if you consider it a waste of time thus to humor a dilettante, but, in case you have the patience–and I have not misunderstood what you meant by “fit”–I would appreciate a pointer to those RCP-scenario temperature data.

    Comment by Joe Born — September 4, 2013 @ 6:03 am

    • Joe,

      You are correct, I fitted those parameters based on the forcing and temperature data of the multi-model mean for each of the scenarios. The forcing and temperature data I used are in the Forster et al. (2013) paper I linked to…specifically, Figure 2 (for the RCP scenarios) and Figure 1 (for historical). My own attempt to digitize those figures can be found at https://dl.dropboxusercontent.com/u/9160367/Climate/CMIP5_Data/RCP_MMM.txt. Regarding “fitting” the models, I am not aware of an analytical solution that would allow a simple regression to find the ideal parameters across all four scenarios for that two-layer model, so I used a rather ugly iterative brute-force search, minimizing the sum of squared differences, to find those fits (a hypothetical sketch of such a grid search appears at the end of this thread).

      Comment by troyca — September 4, 2013 @ 7:57 am

      • Thanks a lot.

        I assume that in fitting you used lambda = 2.08, i.e., assumed an effective sensitivity of 3.1 K, as opposed to lambda = 1.18 for a sensitivity of 1.8 K?

        Comment by Joe Born — September 4, 2013 @ 1:05 pm

      • Oops! What I should have said was:”I assume that in fitting you used lambda = 1.18, i.e., assumed an effective sensitivity of 3.1 K, as opposed to lambda = 2.08 for a sensitivity of 1.8 K?”

        Comment by Joe Born — September 4, 2013 @ 1:48 pm

      • Yup, your second comment is correct.

        Comment by troyca — September 5, 2013 @ 7:21 am
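
    For readers following along, here is a hypothetical sketch of the brute-force grid search troyca describes above. It assumes the runTwoBox sketch from the post and lists rcpF/rcpT holding the digitized forcing and temperature series for the four RCPs; the grids and step sizes are invented for illustration:

    ```r
    # Hypothetical brute-force fit: grid-search the three free parameters,
    # keeping the combination that minimizes the summed squared error
    # against the multi-model-mean temperatures across all four RCPs.
    bestSSE  <- Inf
    bestPars <- NULL
    for (c1 in seq(50, 150, by = 5)) {
      for (c2 in seq(200, 800, by = 25)) {
        for (g in seq(0.2, 1.0, by = 0.05)) {
          sse <- 0
          for (s in names(rcpF)) {  # rcpF/rcpT: assumed per-scenario series
            pred <- runTwoBox(rcpF[[s]], lambda = 1.18,
                              c1Depth = c1, c2Depth = c2, layerTransfer = g)
            sse  <- sse + sum((pred - rcpT[[s]])^2)
          }
          if (sse < bestSSE) { bestSSE <- sse; bestPars <- c(c1, c2, g) }
        }
      }
    }
    ```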

  2. Not that it matters, but the parameters I’ve come up with are 57, 347, and 0.7 instead of 95, 500, and 0.4. (I used lambda = 1.18, which I’m assuming is what you did.) By my measure the resultant fit gives 12% less error, little enough that you can’t tell by eyeball which fit is better, and your mileage may vary since I used a two-box routine different from yours. (I adapted to vector responses the approach used by Willis Eschenbach and Paul_K here: http://wattsupwiththat.com/2012/05/31/a-longer-look-at-climate-sensitivity, although I changed it a little because I was unable to convince myself that the way they used it doesn’t add a time offset. Anyway, rather than use a difference equation, they in essence determine the system’s step response and then treat the stimulus (“forcing”) as a sequence of differently delayed steps, so that the response can be calculated as the sum of a number of differently delayed step responses. That’s what I do, too.) A sketch of this superposition approach appears at the end of this thread.

    Comment by Joe Born — September 4, 2013 @ 7:15 pm

    • Interesting…it does not surprise me that you could get a slightly better fit, considering the rather ad hoc method I used to find mine. I was not too concerned, given that the error in my digitization of the figure data was probably comparable to the error of the fit.

      Comment by troyca — September 5, 2013 @ 7:22 am
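
    A minimal sketch of the step-response superposition Joe describes, again assuming the runTwoBox sketch above: because the discretized model is linear and time-invariant, the response to any forcing series equals a sum of delayed step responses scaled by the forcing increments.

    ```r
    # Step-response superposition: precompute the unit-step response, then
    # build the response to an arbitrary forcing as a discrete convolution
    # of the forcing increments with that step response.
    nMax     <- 500
    stepResp <- runTwoBox(rep(1, nMax), lambda = 1.18)  # unit-step response

    respondTo <- function(forcing) {
      m   <- length(forcing)               # requires m <= nMax
      dF  <- diff(c(0, forcing))           # step increments in the forcing
      out <- numeric(m)
      for (i in seq_len(m))
        out[i:m] <- out[i:m] + dF[i] * stepResp[1:(m - i + 1)]
      out
    }
    # For this linear model, respondTo(f) matches runTwoBox(f) exactly.
    ```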

  3. Troy,

    Very nice. I only note that a lot would seem to depend on the accuracy of the atmospheric lifetime for CO2, which I suspect is somewhat overstated by the Bern model most people refer to, because it does not properly handle the influence of thermohaline turnover in reducing atmospheric CO2. Did you explicitly include the black carbon (soot) influence in the aerosol effect?

    Comment by Steve Fitzpatrick — September 9, 2013 @ 6:02 pm

    • Thanks Steve. I confess that I raised my eyebrows a bit at the 185-year effective lifetime for CO2 (it seemed a bit longer than I’d thought), but I can’t say I know enough either way. I may run a few sensitivity tests on that parameter…for this particular scenario, though, I would expect it to affect the more distant trajectory rather than the peak height.

      I included the black carbon effect insofar as it is included in the IPCC aerosol estimates. To that extent, the assumption is that the effect disappears in the first year after emissions stop.

      Comment by troyca — September 10, 2013 @ 8:48 am

  4. […] 2013/09/02: TMasters: How much more warming would we get if the world stopped emissions right now? D… […]

    Pingback by Another Week of Climate Instability News, September 8, 2013 – A Few Things Ill Considered — September 10, 2013 @ 4:07 am

  5. […] I will be employing the same two-layer model as in my last post.  Since EFS is determined by λ, it is easy to prescribe in this model.  However, the TCR […]

    Pingback by Relative importance of transient, effective, and equilibrium sensitivities | Troy's Scratchpad — October 11, 2013 @ 8:16 pm

