Troy's Scratchpad

February 23, 2013

Could the multiple regression approach detect a recent pause in global warming? Part 3.

Filed under: Uncategorized — troyca @ 11:29 am

Part 1

Part 2

In the first two parts of this series, I demonstrated how multiple regression methods that assume an underlying linear "signal" are unable to properly reconstruct a pause in surface temperature warming when attempting to remove the volcanic, solar, and ENSO components from my simple energy balance model.  That is, an approach similar to Foster and Rahmstorf (2011) will tend to underestimate the warming influence of volcanic recovery and overestimate the cooling influence of solar activity over recent decades to compensate for the pause.  With the improvement Kevin C mentioned, there is some ability to detect a longer tail for the volcanic recovery (indeed, it does so nearly perfectly if the underlying signal is actually linear), and the solar influence is no longer overestimated.  Unfortunately, it still underestimates the recent warming influence from volcanic recovery in my energy balance model in the "flattening" scenario 2.
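
To make the setup concrete, here is a minimal sketch of this kind of regression in R, on purely synthetic stand-in series (FR11 itself uses observed ENSO, aerosol, and solar indices, and also fits an optimal lag for each):

    # FR11-style multiple regression sketch; every series below is a
    # synthetic stand-in invented for illustration only
    set.seed(1)
    n    <- 396                                      # monthly, 1979-2011
    t    <- (1:n)/12                                 # time in years
    enso <- as.numeric(arima.sim(list(ar=0.9), n))   # stand-in ENSO index
    aod  <- exp(-((1:n) %% 150)/24)                  # stand-in volcanic aerosol pulses
    tsi  <- sin(2*pi*t/11)                           # stand-in 11-year solar cycle
    temp <- 0.015*t + 0.06*enso - 0.3*aod + 0.05*tsi + rnorm(n, sd=0.1)
    fit  <- lm(temp ~ t + enso + aod + tsi)          # assumes a linear underlying signal
    # "adjusted" series: remove only the fitted natural components
    adj  <- temp - coef(fit)["enso"]*enso - coef(fit)["aod"]*aod - coef(fit)["tsi"]*tsi

The key point for this series of posts is the "~ t" term: the natural components are fit jointly with a straight line, so any real departure from linearity in the underlying signal gets absorbed into the natural-component coefficients.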

I had thus wondered whether this long-tailed volcanic recovery was merely an artifact of my simple model, or indeed may have contributed substantial warming from 1996 (when the Pinatubo stratospheric aerosols were virtually gone) onward.  There are not that many models that have contributed volcanic-only experiments to CMIP5 (I showed one in Part 1, and Gavin showed an ensemble for GISS-EH at RealClimate in response to this discussion).  However, there is plenty of data from the natural-forcing-only historical experiment, which, by averaging several of the runs for a particular model, can give us a good idea of the forced volcanic + solar influence in those GCMs.

In the figure below, I have shown the mean of the historicalNat runs for 7 individual CMIP5 models that have 4 or more of these experiment runs.  As such, this should give an idea of the forced response in these models without much additional unforced variation.  I have also plotted on the same chart the volcanic + solar influence as diagnosed by the FR11 and Kevin C methods when using the HadCRUTv4 dataset.
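
(The averaging step itself is simple; as a sketch in R, with a hypothetical runs list holding one model's historicalNat global-mean tas series:)

    # runs: hypothetical list of equal-length annual global-mean tas anomaly
    # vectors, one per historicalNat realization of a single model; the
    # unforced variability largely averages out in the ensemble mean
    ens_mean <- rowMeans(do.call(cbind, runs))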

[Figure: VolcanicPlusSolar — CMIP5 historicalNat ensemble means for 7 models vs. the volcanic + solar influence diagnosed by the FR11 and Kevin C methods from HadCRUTv4]

As can be seen, the volcanic response in all of these AOGCMs is far larger and has a longer tail than diagnosed by the multiple regression methods.  Now, certainly it is possible that these volcanic responses in AOGCMs are too large, as there is evidence to suggest that the CMIP5 runs don’t properly simulate this response.  However, the fact that the FR method shows far lower sensitivity to volcanoes, while simultaneously showing a much larger sensitivity to solar influences than either GCMs or simple energy balance models indicate, suggests that it may be compensating for the recent flattening.  Indeed, it is quite difficult to conceive of a realistic, physics-based model that does not indicate a substantial volcanic-recovery-induced warming contribution after 1996, despite it being virtually non-existent in the FR11 diagnosis (the increase around 1998 in the FR line is actually solar-induced).

The table below highlights the warming contribution of the model ensembles (in K/Century, so be careful!) from the indicated start year through 2012 (I have an * by CCSM4 because the runs end in 2005).  

 

[Table: VolcanicPlusSolarTable — solar + volcanic warming contribution (K/century) for each model ensemble, from the indicated start year through 2012; * CCSM4 runs end in 2005]

For comparison, the HadCRUTv4 trends over these same periods are

1979-2012: 1.55 K/Century
1996-2012: 0.91 K/Century
2000-2012: 0.38 K/Century

If one believes that this range of GCMs represents the true forced response of solar+volcanic, it would suggest that these natural forcings were responsible for 15% to 51% of the warming trend from 1979-2012 (that is, model solar+volcanic trends of roughly 0.23 to 0.79 K/century against the observed 1.55 K/century).  If I had to bet, I would probably put it on the lower end, as the AOGCMs appear to be a bit too sensitive to these radiative perturbations and suggest too much ocean heat uptake, which probably creates longer tails on the early volcanic eruptions than is warranted.  However, I do think the contribution is probably greater than 0%, which is about what the FR method puts it at.

The periods from 1996 to present and from 2000 to present, however, are where I think we see the larger misdiagnosis.  Whereas all models (including my simple energy balance model) indicate that the solar+volcanic influence from 1996 to present was positive, and comparable in magnitude (median: 0.81 K/century, mean: 1.05 K/century) to the actual HadCRUT trend, both regression methods suggest either a slightly negative or near-zero influence from these components.  And from 2000 to present, while the models are more split (with 6 of the 7 suggesting a positive influence, and the range varying more widely), it is difficult to believe that the actual influence of solar+volcanic is as strongly negative as the FR method indicates.  This is why it looks to me like the multiple regression method underplays the influence of volcanic recovery in order to partly compensate for a recent pause.

Essentially, we are left wondering if the GCMs are too sensitive to volcanic eruptions, and/or if the multiple regression method is underestimating their influence to compensate for a recent pause.  Again, if I had to bet, it would probably be in the middle – the GCM response is generally a bit too large, but the response is not nearly as small (or short) as the FR11 method would indicate.   

Data (including all of the globally processed tas for the models shown; please give credit here if you use them in this processed form) and code are available.

29 Comments »

  1. Wow. Even if you exclude CCSM4 as an outlier, the contribution to the trend since ’96 from volcanic recovery in the model ensemble is about the same as the observed trend. Here is a plot of post-’96 observations alongside your ensemble. The contradiction with the F&R multiple regression method could not be more stark. IMO one cannot suggest that the F&R method is consistent with models. I think the pre-1996 comparison between models and observations shows that models might be too sensitive to volcanoes, as you suggest, or perhaps that unaccounted-for factors are offsetting the effect. I’ll plot that up when I get time a little later.

    A most fascinating series of posts and worthy of more attention and discussion.

    Comment by Layman Lurker — February 27, 2013 @ 9:31 am

    • Ok, here is a plot showing Hadcrut4 from 1979 to 1995 detrended by 0.2 C per decade (no offset) alongside the ensemble. In my previous comment I put up the Hadcrut4 segment from 1996 to current; I offset that segment by -0.35C with no trend adjustment to align with the ensemble. Here are the two segments together alongside the ensemble. Therefore, the volcanic + solar only ensemble you have shown merely requires an offset of +0.35C and an added trend of 0.2C per decade prior to 1996 for an impressive fit with observations.

      Comment by Layman Lurker — February 27, 2013 @ 7:52 pm

    • Correction on the detrending used prior to 1996. It was 0.24C per decade, not 0.2C

      Comment by Layman Lurker — February 28, 2013 @ 7:09 am

    • Thanks Layman, I’ve been slammed with my day job lately and have clearly been neglecting the blog and comments!

      Comment by troyca — March 29, 2013 @ 9:31 pm

  2. Troy,
    I note that people are continuing to cite F&R quite happily – including Hoskins recently in defense of the UK CCC.
    Have you thought about submitting a formal rebuttal paper?

    Comment by Paul_K — March 29, 2013 @ 4:05 am

    • Hi Paul,

      The short answer is that yes, we’ve certainly been thinking about a publication but so far my day job has prevented any real action in that direction. Moreover, I’ve had two other papers under review that I’d submitted months ago, so what little time I’ve had for climate science has been towards revisions at the expense of the blog and new work.

      Comment by troyca — March 29, 2013 @ 9:35 pm

      • On what subject?

        Comment by R — April 2, 2013 @ 12:25 am

      • R – “On what subject?”

        One of the papers was a response to Humlum et al., the other was on the sensitivity work I had done, both of which I have previously posted on (although obviously there have been modifications throughout the review process). I am pleased to say they have now both been accepted and I will be posting on them shortly (and perhaps even getting a chance to write this multivariate regression stuff up in the near future).

        Comment by troyca — April 9, 2013 @ 7:50 am

  3. Those volcanic trend numbers are consistent with what I’m seeing from Hansen 2011, from the 2-box model and from the Pinatubo papers I’ve glanced at.

    The papers referencing F&R are an odd bunch. The largest group are on trees; there are a couple of (presumably skeptic) papers from KK Tung, who is keen on regression, and a couple from Rahmstorf. Benestad references it for the 30-year trend. One by a T Masters. Most interesting is a paper by van Hateren, which also picks up the anomalous solar response: http://link.springer.com/article/10.1007/s00382-012-1375-3

    The three tree papers I can read use the paper as a throwaway cite in the background section, to establish that global warming has been going on for around 30 years at a plausible rate.

    So my impression is that impact in the peer-reviewed literature is smaller than the citation count would suggest. Media response is a different matter. My own work was published at SkS, so I’ll rebut it there (I’ve already got health warnings on all the posts.)

    Comment by Kevin C — April 14, 2013 @ 1:31 pm

    • Interesting. Yup, I cited it when I suggested the high/fast solar response could have implications for the effective mixed layer heat capacity, although in hindsight it seems the response is bound to be over-estimated using this method. I still think it could be good to address the topic in the literature, and with those other papers finishing review I have some time opening up here.

      Comment by troyca — April 16, 2013 @ 6:48 am

  4. I have my own made-up method of exponentially lagging a forcing. It’s basically a one-box model, with the exception that I apply an exponent to the time variable. For example: exp(-t^(1/2)/tau) uses the square root of time. No physics that I can think of justifies this approach; I just like it, as it gives a response that is at first faster and then slower than the usual one-box model.

    Here’s some code showing how I applied this method to calculate a NINO3.4 correction to temperature. It’s rather rough, but you should be able to decipher it if you so choose. Sorry for not localizing all my variables.

    ################ read in HadCRUT4 data
    
    eval_styr = 1880
    eval_enyr = 2012
    
    obs_df = read.table("http://climexp.knmi.nl/data/ihadcrut4110_ns_avga.txt",skip=1)
    names(obs_df) = c("t","anom")
    
    obs_df = subset(obs_df,t >= eval_styr & t < (eval_enyr+1))
    
    t = obs_df$t
    t2 = t^2
    
    obs_df$resid = resid(lm(obs_df$anom ~ t2 + t)) # quadratically detrend anomalies
    
    obs_df$resid = obs_df$resid - mean(obs_df$resid)
    
    ################ read in NINO3.4 data
    
    force_styr = 1870
    force_enyr = 2012
    
    
    force_df = read.table("http://climexp.knmi.nl/data/ihadisst1_nino3.4aa.txt",
                       skip=1)
    
    names(force_df) = c("t","anom")
    
    force_df = subset(force_df, t >= force_styr & t < (force_enyr+1))
    
    force_df$anom = force_df$anom - mean(force_df$anom)
    
    ### show lag temperature vs forcing lag
    
    x = intersect(force_df$t, obs_df$t)
    force = subset(force_df,t %in% x)$anom
    obs   = subset(obs_df,t %in% x)$resid
    
    windows()
    acf_before = ccf(obs, force, lag.max=12)
    acf_before
    
    # setup forcing vectors
    
    f_diff  = diff(force_df$anom)  # i.e. rate of change
    f_accum = force_df$anom[2:nrow(force_df)]
    f_t = force_df$t[2:nrow(force_df)]
    
    
    ### function to calculate the forcing remaining in the pipeline at month t:
    ### each past forcing increment decays as exp(-(age^t_exp)/tau)
    
    pipe_accum = function(t, tau, t_exp){
    
      n       = (t:1)/12             # age of each monthly increment, in years
      rate_in = f_diff[1:t]          # monthly forcing increments up to month t
      fact    = exp(-(n^t_exp)/tau)  # stretched-exponential decay factor
      sum(rate_in * fact)
    
    }
    
    
    ### function to calculate forcing out of the pipeline
    ### (total forcing in minus the amount still accumulated in the pipeline)
    
    pipe_out = function(t, tau, ff, t_exp){ff * f_accum[t] - ff * pipe_accum(t, tau, t_exp)}
    
    
    ### function used by nls to find optimal tau, ff, and t_exp values.
    ### returns "correction" vector to be compared to detrended HadCRUT4 values
    
    find_parms = function(tau, ff, t_exp){
    
      # for each month, calculate forcing out of pipeline
      
      corr = mapply(pipe_out, mth, tau, ff, t_exp)
      
      corr_df = data.frame(t=f_t,corr=corr)
    
      corr_df = subset(corr_df,t>=eval_styr & t<(eval_enyr+1))
    
      (corr_df$corr = corr_df$corr - mean(corr_df$corr))  # return anomalies
      
    }
    
    ### calculate correction
    
    mth=1:length(f_diff)
    
    # set initial values for nls
    
    tau   = 3
    ff    = 1    # force factor
    t_exp = 1/2  # time exponent
    
    nlsmod = nls(resid ~ find_parms(tau, ff, t_exp), data=obs_df, start=list(tau=tau,ff=ff,t_exp=t_exp))
    summary(nlsmod)
    
    corr = predict(nlsmod)  # correction vector
    obs  = obs_df$resid
    
    windows()
    acf_after = ccf(corr, obs, lag.max=12)
    acf_after
    

    Comment by AJ — April 17, 2013 @ 7:08 am

    • That’s certainly the right kind of behaviour.

      It would be nice to find an alternative to the 2-box model, or exponential models in general, because they tend to be very ill-behaved in quadratic refinement (very long, thin, curved valleys in the target function running diagonally to the variables), which means that the uncertainty estimates are unhelpful. If we could get good uncertainty estimates on the response function, then these could feed into the adjusted temperature trend uncertainties. This may settle the question of whether the uncertainty on the adjusted trend is higher (due to uncertainty in the adjustments) or lower (due to removing some of the short-term variations) than on the raw temperature series. The volcanic term is critical here – at the moment it looks to me as though the ENSO term is robust, and the solar fairly robust (and small) once a response function is included.

      I think the ideal target from a refinement point of view would be composed of basis functions with compact support on the time axis, and increasing spacing as time goes on to compensate for the weaker signal. I did some experiments with quadratic B-splines on a logarithmic time axis, but I can’t beat the fit of the 2-box model. (Actually, 1-box plus transient gives a better AIC on annual data.)
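
      (For what it’s worth, a minimal sketch of that kind of basis in R; the knot placement here is an arbitrary illustrative choice, not Kevin C’s:)

      # quadratic B-splines on a log time axis: compact support, with wider
      # spacing at longer lags where the signal is weaker
      library(splines)
      age   = seq(1/12, 30, by=1/12)   # years since the forcing impulse
      basis = bs(log(age), degree=2, knots=log(c(0.5, 1, 2, 4, 8, 16)))
      matplot(age, basis, type="l", log="x",
              xlab="years since impulse", ylab="basis value")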

      Comment by Kevin C — April 18, 2013 @ 2:18 am

    • Oh, just to add:
      Reading data directly from KNMI? Wow! That’ll make my life so much easier.

      Comment by Kevin C — April 18, 2013 @ 4:44 am

      • Kevin… just a few notes:

        In the example above, I have the nls function fitting tau, the multiplication factor to apply to the forcing, and the time exponent. This is sensitive to the start and end dates. For example, the code above gives a time exponent value of about 1, whereas if I set force_styr=1940 and eval_styr=1950, the value comes out to about 0.5. In another case where I was using SOI, the time exponent was about 1.5, which was a real head-scratcher. A value >1 produces a slow-fast-slow response.
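
        (A quick way to visualize those response shapes, as an illustrative sketch only: for a unit step of forcing, a kernel of this form implies a realized response of 1 - exp(-(t^t_exp)/tau).)

        # step responses for three time exponents: t_exp < 1 is fast-then-slow,
        # t_exp = 1 is the usual one-box decay, t_exp > 1 is slow-fast-slow
        t    = seq(0, 20, by=0.1)   # years
        tau  = 3
        resp = cbind(1 - exp(-(t^0.5)/tau),
                     1 - exp(-(t^1.0)/tau),
                     1 - exp(-(t^1.5)/tau))
        matplot(t, resp, type="l", lty=1, xlab="years",
                ylab="fraction of response realized")
        legend("bottomright", c("t_exp = 0.5", "t_exp = 1", "t_exp = 1.5"),
               col=1:3, lty=1)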

        I compared my correction with that obtained by Lucia using the Thompson et al (2008) Cold-Tongue method. The results for the post 1950 period were in high agreement with a correlation coefficient > 0.90 (IIRC). A link to Lucia’s code can be found in this page:
        http://rankexploits.com/musings/2013/uncertainty-intervals-in-knight-the-toy-problem/

        Last night I ran the above code on the period 1979-2010 to produce an ENSO corrected HadCRUT4. I then ran it again with Troy’s optical depth data as the forcing. The resulting century trend for the 1996-2010 period was +0.11C, which is close to your value. The correction curve looked similar to yours.

        My comment for the pipe_out function should have read “simply amount in – amount accumulated in the pipeline”.

        I’m not sure why WordPress didn’t interpret my R code quite correctly. It tried, but failed on certain characters. I don’t believe it was a problem with my source code tags.

        A few months ago I used a similar approach which compared total annual forcing anomalies with non-ENSO-corrected HadCRUT4 data. In this case I used the “integrate” function on forcing rates obtained from the “smooth.spline” function with a “deriv=1” option. This was probably overkill. Additionally, I was seeing if I could reproduce the 1%-per-year-to-2xCO2 model experiments. Not sure if it is of interest to anyone besides myself, but here’s the link anyway:

        https://sites.google.com/site/climateadj/simple-model-of-models

        Comment by AJ — April 18, 2013 @ 7:44 am

  5. I was trying to run a similar type of analysis on a region and noticed the same thing – my regression using the SATO index (NH) is missing a portion of the volcanic response. I think using an exponential decay function might improve the results, but are there any relatively easy-to-implement suggestions? This isn’t aimed at detecting changes in trend, but rather at modeling natural variability.

    Comment by R — April 24, 2013 @ 12:29 pm

    • Hi R,

      As I saw from Kevin C’s implementation, the easiest way to do this seems to be convolution. So for a one-box model, you are looking at something like this:

      # note: convolve() with type='o' reverses its second argument, so this
      # increasing exponential acts as the usual decaying one-box kernel
      g_x<-exp(t/tau)
      g_x<-g_x/sum(g_x)   # normalize so the kernel weights sum to 1
      forcingEffect<-convolve(forcing, g_x, type='o')
      forcingEffect<-forcingEffect[1:length(forcing)]   # keep the causal part, aligned with forcing
      test.lm<-lm(temperature ~ forcingEffect)
      

      Where t is your array of months, and forcing (also an array) contains the forcing value for each month. You can run a regression with the observed temperature and forcingEffect to get that coefficient, but you are still left to determine the best value for tau…I just used a loop to find it rather than finding an analytical solution.
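
      As a sketch of that loop (same hypothetical t, forcing, and temperature arrays as above; the grid of candidate taus is arbitrary):

      taus <- 1:100                               # candidate e-folding times (months)
      r2 <- sapply(taus, function(tau) {
        g_x <- exp(t/tau)
        g_x <- g_x/sum(g_x)
        fe  <- convolve(forcing, g_x, type='o')[1:length(forcing)]
        summary(lm(temperature ~ fe))$r.squared   # score each tau by fit quality
      })
      best_tau <- taus[which.max(r2)]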

      Hope that helps.

      Comment by troyca — April 25, 2013 @ 10:13 am

      • Hi Troyca,

        To get the forcing I multiplied aerosol optical depth by -23, which is given on the GISS website as the effective radiative forcing. However, I’m not sure what tau actually is?

        Comment by R — April 27, 2013 @ 10:03 am

      • There’s an additional problem that you run into with these types of models – for instance, the impacts of volcanic eruptions may be seasonally dependent, with summers showing a stronger impact on temperatures than winters, so running a regression on monthly data would be problematic; likewise, the Arctic Oscillation is strongly linked to volcanic eruptions and causes an impact in the high-latitude NH. The issue of seasonal dependency is important because regression would simply smear out a signal, overestimating the relationship for months that might show no relationship and underestimating the influence in months that have a stronger relationship (summer). I noticed this in a small regional study at least.

        Using the convolution suggestion above I am getting not a particularly strong relationship – weaker, in fact, than the raw forcing data gives (I’ve tried tau 2:100, way more than is needed).

        Comment by R — April 27, 2013 @ 11:07 am

      • R,

        In this case, I’m using tau to refer to the decay time constant / e-folding time. In the solution to the simple energy balance equation frequently used, it is equal to C/lambda, where C is the mixed-layer heat capacity and lambda is the strength of radiative restoration. Regarding the -23 from GISS, I believe that is an approximate value referring to the global effect, so regional modeling might be more difficult.
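
        (For reference, the textbook one-box balance behind that definition is

        $$C\,\frac{dT}{dt} = F(t) - \lambda T, \qquad \tau = \frac{C}{\lambda},$$

        so a step forcing of size F relaxes as T(t) = (F/lambda)(1 - e^{-t/tau}).)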

        If there is a strong seasonal dependence in the region you are examining, there are certainly additional challenges, and I suppose you could try using a sine wave with the appropriate phase shift and a period of a year as a first pass, but I think doing regional studies you are going to encounter larger issues. As you indicate, you could try using the method on annual data first. However, I would think with regional studies you would need to use something other than the global value for optical depth (use the gridded values over the regions that apply), and need to include a heat transport in/out of the region as an additional term.

        Comment by troyca — April 28, 2013 @ 4:23 pm

  6. Troy,

    This is a very good series of three posts; I do hope you find time to write it up, since a formal rebuttal of F&R has not yet appeared, even though lots of folks have commented on the weaknesses of the F&R multiple regression model. There are other lines of argument for why F&R is weak and the paper’s conclusions dubious at best, but I think this series of posts is the strongest I have heard. Kudos.

    Comment by steve fitzpatrick — April 24, 2013 @ 6:53 pm

    • Thanks Steve, yes, the plan is to still write it up, but IMO there is still a bit more work to do!

      Comment by troyca — April 28, 2013 @ 4:13 pm

  7. Troyca, thanks for your analysis and your sober explanation. I just stumbled upon this analysis for the first time. May I check that I understand its implications? Looking at this with some technical background, but not in climate science, here’s what I think you’re saying:

    (a) If there is a pause in the underlying (non-volcano/solar/ENSO) trend since 1996, the Foster-Rahmstorf method will fail to see it by underestimating the warming contribution from recovery from volcanoes and overestimating the cooling contribution from solar variability. (b) If the underlying trend is approximately linear, the Foster-Rahmstorf method will get the magnitude of that trend roughly correct.

    If (a) is correct, Foster-Rahmstorf’s analysis doesn’t prove that there has been no slowdown in greenhouse-gas global warming. On the other hand, if (b) is correct, the leveling off of the measured temperature trend in recent years doesn’t disprove the existence of continued global warming, either.

    Is that a fair summary of the implications of your analysis?

    Comment by Bill Avrin — May 5, 2013 @ 11:30 am

    • Hi Bill,

      Yes, I think that is a good summary. The only additional point I would make is that currently the FR analysis seems to find a very small response to volcanoes, which to me hints at a non-linear underlying trend, otherwise it is fairly difficult to reconcile the low sensitivity implied by such a response with the rate of warming implied by an underlying linear trend.

      Comment by troyca — May 6, 2013 @ 6:33 am

      • Thanks, Troyca. I had two other thoughts:

        First, it seems like the regression coefficients must get really unstable (that is, whatever matrix you have to invert must be less well conditioned than one would like) because the volcano response varies over a timescale comparable to that of other things going on in the system. It looked like the volcano response from the GCM ensembles decayed with a time constant of eight years or so. The solar brightness must vary on several-year timescales as well. Several years of El Nino followed by several of La Nina might also create a temperature variation with similar characteristic timescales. What happens to the stability of the regression when you allow for that longer timescale in the volcano response?

        Second, following up your point about the low volcano sensitivity hinting at a non-linear underlying trend: As I understand it, the FR analysis also gets too big a solar brightness response. If we take that literally, it would seem to imply a high sensitivity to the sun, but a low one to shading by volcanic aerosols. Granted, shading from aerosols isn’t the same phenomenon as brightening or dimming of the sun, but I have the impression that the current physical understanding points to reasonably similar sensitivities to both. How does that fit in with your analysis of the underlying trend?

        Comment by Bill Avrin — May 6, 2013 @ 9:34 am

      • Hi Bill,

        It seems like you have an excellent handle on the situation. In the previous post in this series (#2), I had used the exponential decay function for the response, per Kevin C’s suggestion, which allowed for a longer volcanic response but still could not capture the true response while also assuming an underlying linear trend (when there was in fact no underlying linear trend), in part because that volcanic recovery resembles something very much like a linear climb around the same time the “pause” theoretically occurs.

        You are also correct about the sun vs. volcanic sensitivities. If I recall correctly, the original FR paper showed something like 7 times more sensitivity to a solar forcing than to a volcanic forcing of the same magnitude, but after correcting the graph it turned out to be more like 4 times. Either way, this doesn’t really make sense according to our current understanding – I would think the efficacy factor might be 1.5 at most to be physically realistic. My comment on sensitivity vs. underlying trend should have been specifically referring to the volcanic response – if the FR diagnosis of the volcanic response is to be believed (which, again, if I recall correctly corresponds to a 2xCO2 sensitivity of ~0.4 K), then it doesn’t make a lot of sense within the context of the underlying-trend diagnosis in that paper, which implies a much higher sensitivity.

        Comment by troyca — May 8, 2013 @ 8:08 am

      • Thanks, Troyca, this is very helpful. Also, this is a lot of fun to think about.

        I didn’t get what happens to the stability of the regression coefficients when you allow for that longer-lasting volcano response. That is, have you done an analysis of how much the results vary with small perturbations in the data or assumptions? If the regression coefficients are unstable, that is, if the equations you invert aren’t that well conditioned, that might explain some of the seeming contradictions.

        Looking at the volcanic + solar ensemble means, it’s conceivable that the volcano response isn’t as long-lasting as a first glance might make it appear. It’s too bad we don’t have a volcano-only ensemble. The responses after earlier volcanoes seem shorter to my eye than the 1992 one. Naively, I would have thought that such a short-lived forcing would produce a short-lived response, because the cooling wouldn’t have time to soak deeply enough into the oceans.

        Have you any ideas of plausible physical mechanisms for a non-linear underlying (non-ENSO/non-solar/non-volcano) trend?

        Thanks again. It’s intriguing stuff.

        Bill

        Comment by Bill Avrin — May 8, 2013 @ 8:47 am

      • Hi Bill,

        I agree it is quite interesting! Sadly, I have not had time to do much analysis WRT stability. There are a few volcanic-only runs (mostly from GISS), but certainly not the variety you get with the natural only. At this point I have not really investigated physical mechanisms for a non-linear underlying trend, so it would really only be speculating based on what others have said.

        Comment by troyca — May 9, 2013 @ 7:55 am

  8. […] has been a while since I posted the first three parts of a series on whether using multiple linear regressions to remove the solar, volcanic, and ENSO […]

    Pingback by Another “reconstruction” of underlying temperatures from 1979-2012 | Troy's Scratchpad — May 21, 2013 @ 8:26 am

  9. […] · CMIP5 multi-model mean for natural only comes from my previous survey […]

    Pingback by Breaking down the discrepancy between modeled and observed temperatures during the “hiatus” | Troy's Scratchpad — February 21, 2014 @ 10:21 pm

