I saw this video at David Appell’s blog, as well as Eli Rabett’s. For full disclosure, I think that a < 2C ECS is as likely as not, as my recent paper suggests. Furthermore, I previously had a paper published noting the sensitivity of an earlier cloud feedback paper by Dr. Dessler to dataset choice.
While I think the beginning of the video is a fair introduction to the concepts of sensitivity and feedback, the video basically makes 3 arguments as to why sensitivity is unlikely to be < 2C, and I do not find them particularly convincing:
1) "From data, you can get that f is about 0.6"
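For context, in the standard framework the feedback factor f amplifies the no-feedback (Planck-only) response via ΔT = ΔT0/(1 − f). A minimal sketch of that relation, assuming the commonly cited no-feedback sensitivity of roughly 1.2 K for doubled CO2 (my assumption, not a number from the video):

```python
# Standard feedback-amplification relation: dT = dT0 / (1 - f).
# dT0 ~ 1.2 K is the commonly cited no-feedback (Planck-only)
# response to doubled CO2 -- an illustrative assumption here.
def ecs_from_f(f, dT0=1.2):
    return dT0 / (1.0 - f)

print(round(ecs_from_f(0.6), 1))  # f = 0.6 -> ECS ~ 3.0 K
print(round(ecs_from_f(0.5), 1))  # f = 0.5 -> ECS ~ 2.4 K
```

So the video's f ≈ 0.6 is equivalent to claiming an ECS near 3 K, which is why the choice of dataset and method behind that number matters so much.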
Obviously, there is a lot of "data", and a lot of different methods to interpret those data, all of which give different estimates for ECS (and hence f). In this case, the reference is to Dessler 2013, J Climate, which is not only just one dataset/method combination, but also one that is ill-suited for the purpose of determining ECS. Note the following huge caveat of the D13 paper:
Second, the differences in the feedbacks between the control model runs and the A1B model runs stress that one should be careful in applying conclusions about the feedbacks derived from internal variability to longer-term climate change. [my bold]
Essentially, this caveat is necessary to pass review because when the method employed by D13 (using ENSO-induced flux variations to derive sensitivity) is applied to the models, it yields a "thermal damping rate" of -0.6 W/m^2/K, which corresponds to a sensitivity of 6.2 K. Compare that to the more relevant long-term sensitivity calculated from the A1B ensemble of 2.93 K (thermal damping of -1.26 W/m^2/K), and it’s clear that the D13 paper shows that its own method overestimated the sensitivity by a factor of more than 2. Essentially, if the method fails to adequately diagnose the ECS of the models when it is applied to that data, we’re not likely to have confidence that the method can diagnose the real-world sensitivity.
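The conversion I am using above is simply ECS = F_2x/|λ|, where λ is the thermal damping rate and F_2x is the forcing from doubled CO2 (I assume the canonical ~3.7 W/m^2; D13 may use a slightly different value). A quick sketch:

```python
# Convert a thermal damping rate (W/m^2/K, negative = stabilizing)
# into an equilibrium climate sensitivity. The 2xCO2 forcing of
# ~3.7 W/m^2 is an assumed canonical value.
F_2X = 3.7  # W/m^2, assumed

def ecs_from_damping(lam):
    return F_2X / abs(lam)

print(round(ecs_from_damping(-0.6), 1))   # short-term method on models: ~6.2 K
print(round(ecs_from_damping(-1.26), 2))  # A1B long-term ensemble: ~2.94 K
```

The second number lands essentially on the 2.93 K quoted above, so the factor-of-two discrepancy between the two damping rates carries straight through to the sensitivity estimates.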
Several other papers have noted the shortcomings in applying the ENSO-induced inter-annual fluctuations to calculating climate-scale feedbacks, and it is a topic I have spent a good amount of time on this blog discussing. For example, Colman and Hanson (2012):
A comparison is also made of model feedbacks with reanalysis derived feedbacks for seasonal and interannual timescales. No strong relationships between individual modelled feedbacks at different timescales are evident: i.e., strong feedbacks in models at variability timescales do not in general predict strong climate change feedback, with the possible exception of seasonal timescales. [D13 uses interannual timescales].
This method is essentially the same approach used by Forster and Gregory (2006), which estimated an ECS of 1.6 K. They obtained different values by using ERBE (rather than CERES) and a different time period, which should give an indication that the method is not particularly robust. It is also quite sensitive to various methodological choices (using tropospheric rather than surface temperatures, using leads/lags), which can yield very different sensitivity results (as low as 0.6 K in the Lindzen and Choi (2011) case), but none of them seem to capture the ECS when applied to models, nor do any offer compelling arguments for why these choices should make such a huge difference if indeed they are estimates of ECS. Lest you think I am picking only on "higher sensitivity" results for this method, I have noted similar shortcomings in Lindzen and Choi (2011) and more recently in the Bjornbom discussion paper.
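To make that fragility concrete: this family of methods regresses TOA flux anomalies against temperature anomalies and reads λ off the slope. A toy sketch (entirely synthetic data; the "true" λ, noise level, and record length are my assumptions, not anyone's published numbers) shows how much the recovered ECS can swing purely from unforced radiative noise in a short record:

```python
import random

F_2X = 3.7          # assumed 2xCO2 forcing (W/m^2)
TRUE_LAMBDA = -1.2  # assumed underlying damping rate (W/m^2/K)

def ols_slope(x, y):
    # Ordinary least-squares slope of y regressed on x.
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

estimates = []
for seed in range(20):
    rng = random.Random(seed)
    # ~10 years of monthly anomalies; the unforced radiative noise
    # (clouds etc.) is comparable in size to the feedback signal.
    temps = [rng.gauss(0.0, 0.2) for _ in range(120)]
    flux = [TRUE_LAMBDA * t + rng.gauss(0.0, 0.8) for t in temps]
    lam_hat = ols_slope(temps, flux)
    estimates.append(F_2X / abs(lam_hat))

print(f"ECS estimates span {min(estimates):.1f}-{max(estimates):.1f} K "
      f"(true value {F_2X / abs(TRUE_LAMBDA):.1f} K)")
```

Even with the damping rate held fixed by construction, different noise realizations produce a wide spread of "measured" sensitivities, which is consistent with ERBE vs. CERES or a shifted time period giving such different answers.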
A while back, I submitted a paper using this type of approach (using satellite fluxes and temperatures to estimate ECS), and several reviewers pointed out that there is little evidence or validation suggesting that these short-term variations in globally-averaged quantities give an indication of longer-term sensitivity. After doing a good deal more research into the topic, I was inclined to agree. I would think a similar point was made by reviewers of the D13 paper, and hence the strong caveat mentioned above. What likely happens is that ENSO produces localized warming and feedbacks at different times during the evolution of a single phase, which means that when it is averaged together into a global quantity it says very little about the long-term response.
Consider the hubbub that has been made about the difference between ECS and "effective sensitivity", the latter of which was calculated from 50-150 years of data, based on the evolution of spatial warming over time [Armour et al. (2012)]. If even a century of data is deficient for calculating ECS, it is difficult to see how the inter-annual response will reveal much about it.
2) "…getting a much lower climate sensitivity, say below 2 degrees Celsius, would require a strongly negative cloud feedback."
I disagree! A strongly negative cloud feedback is not required to end up with a sensitivity of < 2K. Generally, even a neutral cloud feedback will do the trick. For instance, if you look at Soden and Held (2006), 7 of the 12 models for which cloud feedback is presented would have a < 2K sensitivity in the absence of a cloud feedback. If the Planck response is ~ -3.2 W/m^2/K, the combined water vapor + lapse rate feedback (better constrained than each individually) is ~ 1.0 W/m^2/K, and the surface albedo feedback ~ 0.3 W/m^2/K, this corresponds to a thermal damping of -1.9 W/m^2/K with no cloud feedback, or a sensitivity of 1.95 K. Basically, if one was 50-50 on whether the cloud feedback is positive or negative, I would say a < 2K sensitivity is more likely than not (albeit barely). Even a slightly negative cloud feedback (~ -0.3 W/m^2/K) would almost ensure an ECS < 2K.
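The arithmetic here is straightforward. A minimal sketch, using the feedback values quoted above and assuming the canonical ~3.7 W/m^2 forcing for doubled CO2 (the +0.5 cloud case is my own illustrative value, not a number from Soden and Held):

```python
F_2X = 3.7  # assumed 2xCO2 forcing, W/m^2

# Feedback values as quoted in the text (W/m^2/K).
planck = -3.2    # Planck response
wv_lapse = 1.0   # combined water vapor + lapse rate
albedo = 0.3     # surface albedo

def ecs(cloud_feedback):
    net = planck + wv_lapse + albedo + cloud_feedback
    return F_2X / abs(net)

print(round(ecs(0.0), 2))   # neutral cloud feedback -> ~1.95 K
print(round(ecs(-0.3), 2))  # slightly negative      -> ~1.68 K
print(round(ecs(0.5), 2))   # modestly positive      -> ~2.64 K
```

In other words, the neutral-cloud case already sits below 2 K, so no "strongly negative" cloud feedback is needed.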
3) "…because we don’t know the forcing, I don’t look at the estimates of climate sensitivities from these studies [using 20th century observations] to be very meaningful."
The dismissal of essentially all recent estimates of ECS from investigation into the 20th-21st century climate based on uncertainty in forcing is highly dubious, in my opinion. First of all, almost all of these studies explicitly take into account the uncertainties in forcing. In fact, my paper shows that even with the uncertainty in forcing, the method used gives a good indication of longer-term sensitivity as tested on the CMIP5 models. These methods have the additional advantage of deriving climate-scale feedbacks, something lacking in the methodology of D13 (as admitted in the paper). Moreover, updated estimates for aerosol forcing suggest that these effects were previously over-estimated…so reworking my paper with smaller aerosol impacts would actually lower the estimate of ECS.
If we are dismissing methods based on uncertainty, there should be little made of paleo estimates, all of which have even greater uncertainty in the forcing, along with large uncertainties in the temperature changes. The argument of "we have uncertainty, therefore we know nothing" is an oft-criticized argument (and I agree with the criticism), because everything has uncertainty. But when this uncertainty is explicitly (and correctly) taken into account, there is no reason to discard the estimates.
Essentially, even if one weighed all the estimates using inter-annual flux variations with temperature changes (among which D13 is one estimate), it would be difficult to say that < 2.0 K ECS is "unlikely". If one took into consideration the 20th-21st century estimates as well, it is near impossible to call it "unlikely". Obviously, science is not democratic and some papers are better than others…but it is only by specifically focusing on the D13 results (using a method that seems to fail validation tests) while discarding the 20th century energy balance observations (a method which "passes" similar validation tests) that the video claims a < 2K sensitivity is unlikely.