For a quick summary: Part 1 simply ran the PHA on datasets with an underlying trend and a stochastic component. Part 2 added a strong UHI bias and examined how well the PHA was able to remove it. I believe that post highlighted an area of concern — that the PHA can potentially remove traces of a UHI signal without removing the contamination itself — which had not, to my knowledge, been previously demonstrated with the actual algorithm.
In this (final?) segment, I insert additional inhomogeneities (step and trend) to see how this affects the PHA. This will go on top of the UHI contamination already present.
There are two separate steps here.
1) Based on the probability of a station change occurring within a year, we determine whether a particular month will be the month that a station changes. If so, a randomly selected step change and trend change are inserted at that month, and all subsequent monthly temperatures are adjusted accordingly.
2) Furthermore, we determine whether that station will undergo an “instrumentation” change, based on the specified probability. This instrumentation change inserts a cooling bias centered around 1985, to roughly simulate the CRS-to-MMTS transition we see in the USHCNv2 dataset.
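The two steps above can be sketched roughly as follows. This is an illustrative reconstruction, not my actual code: the probabilities, the step/trend magnitudes, the size of the cooling bias, and the series layout are all placeholder assumptions.

```python
import random

# Placeholder parameters -- assumptions for illustration, not the values used in the post.
P_STATION_CHANGE_PER_YEAR = 0.05   # assumed chance of a station change in any given year
P_INSTRUMENT_CHANGE = 0.5          # assumed chance a station gets the MMTS-style change
START_YEAR = 1900

def insert_inhomogeneities(monthly_temps, rng=random):
    """Return a copy of a monthly series (starting at START_YEAR) with
    synthetic step/trend inhomogeneities and an instrumentation bias added."""
    temps = list(monthly_temps)
    n_months = len(temps)
    p_month = P_STATION_CHANGE_PER_YEAR / 12.0  # spread the yearly probability over months

    # Step 1: random station changes -> a step change plus a trend change
    # applied from the change month onward.
    for m in range(n_months):
        if rng.random() < p_month:
            step = rng.uniform(-1.0, 1.0)      # assumed step magnitude (degrees)
            trend = rng.uniform(-0.02, 0.02)   # assumed trend change (degrees/month)
            for k in range(m, n_months):
                temps[k] += step + trend * (k - m)

    # Step 2: possible instrumentation change -> cooling bias centered around 1985.
    if rng.random() < P_INSTRUMENT_CHANGE:
        change_year = round(rng.gauss(1985, 3))           # assumed spread around 1985
        change_month = (change_year - START_YEAR) * 12
        if 0 <= change_month < n_months:
            for k in range(change_month, n_months):
                temps[k] -= 0.4                           # assumed cooling bias (degrees)
    return temps
```

The key design point is that both the step and the trend changes persist from the change month to the end of the record, which is what makes them look like genuine structural breaks to a pairwise detection algorithm.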
Now, if we perform our 1970-2000 pairwise test on the dataset that includes the UHI contamination and additional inhomogeneities, we get the following graph:
We see a significantly worse correlation here than in Part 2, obviously due to the inhomogeneities occurring right in the middle of the period we’re examining.
Running the PHA
After running the PHA on the dataset with UHI contamination and inhomogeneities, here is what we get for the average yearly anomalies:
The trends for the various sets are as follows:
Without UHI or Inhom: 0.191 Degrees/Decade
With UHI: 0.389 Degrees/Decade
With UHI + Inhom: 0.370 Degrees/Decade
PHA Adjusted From UHI + Inhom: 0.350 Degrees/Decade
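Trends like those above can be computed from the average yearly anomalies with an ordinary least-squares fit, converting the slope from degrees/year to degrees/decade. A minimal sketch (the ramp series below is illustrative, not the post's data):

```python
def trend_per_decade(years, anomalies):
    """OLS slope of anomalies vs. years, scaled to degrees/decade."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(anomalies) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, anomalies))
             / sum((x - mean_x) ** 2 for x in years))
    return slope * 10  # degrees/year -> degrees/decade

# Example: a perfectly linear 0.02 degrees/year series recovers 0.2 degrees/decade.
years = list(range(1900, 2001))
anomalies = [0.02 * (y - 1900) for y in years]
print(round(trend_per_decade(years, anomalies), 3))  # -> 0.2
```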
As you can see, we do get a small cooling bias from the inhomogeneities, and the PHA does adjust the trend downwards. However, one interesting thing to note here is that the 0.350 post-PHA trend is slightly larger than the 0.345 post-PHA trend from Part 2, despite this part including inhomogeneities with an overall cooling bias (and hence starting from a lower point). This may be due to UHI contamination bleeding into more stations because of the additional change-points detected.
So, how does it look if we try to search for our original UHI signal after the PHA?
Clearly, as predicted, the additional change-points have smeared this UHI signal even more. We might expect that with more change-points (or ones of greater magnitude) the signal would be diluted further.
As I said in part 2, I think these tests show that we should not expect the PHA to be a miracle worker. If there is a UHI signal in the RAW or TOB dataset that seems to disappear in the F52 dataset, it does not necessarily imply that the PHA has taken care of it. It may simply be that the blending of nearby stations has removed the signal, without removing the bias itself.