Scare Pollution is the story of Steve Milloy’s investigation of experiments the US Environmental Protection Agency conducted on human subjects with diesel exhaust. Milloy stumbled upon the EPA’s activities when it published a case report of a middle-aged woman who developed cardiac arrhythmias and needed to be taken to the hospital. It turned out she was one of several study subjects who were exposed to diesel exhaust piped into test chambers, and monitored.
The EPA claims that PM2.5, the purportedly notorious killer in diesel exhaust, kills hundreds of thousands of people in the United States every year and needs to be regulated stringently.
Behind closed doors, when questions about the experiments arose, the EPA had a remarkable defense: PM2.5 was not actually dangerous at all, even when inhaled in high concentrations. It was just some harmless experimentation.
This kind of two-faced rhetoric is common in the climate debate. The latest example surrounds John Bates’ criticism of Karl et al 2015 (K15), a paper touting the effect of adjustments to the instrumental global average record.
Karl et al came out in 2015, some months before the Paris climate agreement. At the time climate consensusists were getting hammered by questions about the pause, an 18-year stretch starting in 1997 that showed almost no increase in global temperatures. K15’s ocean temperature adjustments tweaked the global average just enough to create an upward trend.
This is the headline Carbon Brief ran for the paper:
The authors were clear their paper affected the pause.
This was their title:
This was their abstract:
This was the editorial note to the paper:
To anyone, the paper was about the pause. It’s in the paper title, abstract, and accompanying press releases.
If you got your news from Zeke Hausfather or Victor Venema …
…you would think K15 had almost nothing to do with the pause.
Venema is fond of pushing the line that adjustments ‘reduce global warming.’ By focusing on the pause, K15’s authors left themselves and the practice of adjustments open to the charge of manipulation of trends. Adjustments actually ‘make our estimate of global warming smaller,’ says Venema, as he castigates David Rose for printing John Bates’ objections.
So we have quite some irony here. Rose never mentions that the adjustments make our estimate of global warming smaller; that would not have fit into the conspiracy he is trying to sell.
The context was only slightly different but here he is in 2015, pushing the same line:
Being land creatures people do not always realise how big the ocean is, but 71% of the Earth is ocean. Thus if you combine these two temperature signals taking the area of the land and the ocean into account you get the result below. The net effect of the adjustments is a reduction of global warming.
It was the skeptics, and David Rose, who focused on the ‘right end’ of the global temperature record (the pause):
But Rose is obsessed with the top panel. I made the graph extra large, so that you can see the differences. […] The “problem” is the minute change at the right end of the curves.
You can see Sou Bundanga pushing the same message here:
Applying the corrections to the sea surface temperature data reduces, not increases, the rate of warming over the instrumental period. This is the opposite to what deniers often claim – that all adjustments increase warming!
She even includes a graph from the paper she annotated to drive home the point:
Here’s Realclimate’s Gavin Schmidt at it:
The second panel is useful, demonstrating that the net impact of all corrections to the raw measurements is to reduce the overall trend.
What Schmidt, Venema and the others perform here is pure misdirection.
‘Question Karl et al’s adjustments, will you? Look at all adjustments. We even reduce trends and global warming. You should have no problem buying Karl et al.’
The reality is that almost no skeptic has questioned adjustments to the sea surface records of the 1910-1940 period. In fact, there are reasons to question them, apart from the straw-man arguments of Venema and Schmidt. These NOAA adjustments, which are present in ERSSTv3 and have nothing to do with Karl et al, make temperatures match climate models more easily by reducing the 1910-’40 rate. They reduce an inconveniently high rate of warming during a period with reduced anthropogenic CO2.
Speaking of complaints about reduced rates, here is the effect of NOAA’s methods on the 1940-1979 period, compared to HADCRUT:
That’s right – by reducing the rate of cooling, NOAA renders 1945 – 1974 as a warming period!
No one objected to adjustments because they increase a so-called ‘overall trend,’ a metric that ridiculously involves drawing a straight line from 1880 to 2015 right through the many ups and downs. If you examine the paper itself, you will see it makes only scant mention of the ‘overall trend.’
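The weakness of the ‘overall trend’ metric is easy to demonstrate numerically. Here is a toy sketch with invented numbers (not the actual NOAA record): the full-period least-squares slope stays firmly positive even when the final 18 years of the series are completely flat.

```python
# Toy illustration: a single 1880-2015 straight-line fit hides a pause.
# All numbers are synthetic, constructed for the example.

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic anomalies: steady 0.005 C/yr rise 1880-1996, then a flat pause.
years = list(range(1880, 2015))
temps = [0.005 * (min(y, 1996) - 1880) for y in years]

full = ols_slope(years, temps)               # full-period "overall trend"
pause = ols_slope(years[-18:], temps[-18:])  # 1997-2014: the pause
```

Here `full` comes out close to the underlying 0.005 C/yr rate while `pause` is essentially zero: the single long line is insensitive to exactly the sub-period under dispute.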
Embarrassingly for Schmidt and Venema, K15 makes clear that its own adjustments have no effect on the full period of record (emphasis mine):
For the full period of record (1880–present) (Fig. 2), the new global analysis has essentially the same rate of warming as that of the previous analysis (0.068°C decade−1 and 0.065°C decade−1, respectively) …
K15 states explicitly that its adjustments mainly impact the pause:
…reinforcing the point that the new corrections mainly have an impact in recent decades.
This is Carbon Brief in their article on K15 (emphasis mine):
While the authors apply their corrections to the full temperature record stretching back to 1880, the biggest impact is on the rate of warming in recent decades, say the authors.
If misdirection was not enough …
…confusion is further propagated by misquotation and quote surgery.
Take the example of this Zeke Hausfather tweet:
Hausfather is responding to David Rose’s article on Bates’ criticism of K15. As described above, Hausfather is talking about NOAA adjustments in general, all taken together, making the exact opposite of the claim in the paper itself.
But there’s more. Look at the graph in the tweet, which appears to have been created by him and an organization called ‘Climate Feedback’.
The annotation at the top quotes Rose’s article: “this resulted in the dramatic increase of the overall global trend,” making it appear as though Rose was talking about the 1880-2015 overall trend. Climate Feedback then responds (highlighted in yellow) by offering the now-familiar excuse that ‘all adjustments’ decrease the global warming trend.
But head over to the Daily Mail and it is plain Rose is talking about K15 adjustments to ship-buoy sea surface temperatures, affecting the 2000-2014 period, which in turn produced a dramatic increase in the ‘global trend.’
The sea dataset used by Thomas Karl and his colleagues – known as Extended Reconstructed Sea Surface Temperatures version 4, or ERSSTv4, tripled the warming trend over the sea during the years 2000 to 2014 from just 0.036C per decade – as stated in version 3 – to 0.099C per decade. Individual measurements in some parts of the globe had increased by about 0.1C and this resulted in the dramatic increase of the overall global trend published by the Pausebuster paper.
The adjustments K15 made to sea temperatures affected the global trend – big mystery there, isn’t it?
Climate Feedback and Hausfather have to rip part of a sentence out of its context and pretend its author is not saying what he is saying, but rather something they have a pre-cooked talking point lined up for, in order to pretend they’re providing ‘feedback.’
If you are credible scientists, why would you repeatedly counter criticism of K15’s adjustments by pretending it was about ‘all adjustments’? These are not people who deserve to be taken seriously.
As it is practiced now, no distinct lines are drawn between changes that are needed as an integral part of deriving a global average temperature and adjustments that are justified on grounds of available data being less than ideal. The two are treated as though they were conceptually one and the same. As much as possible, papers and their authors describe their work as an indispensable part of one amalgamated methodological continuum. This continuum, however, has no room to distinguish between tweaks that produce changes of insignificant magnitude and more significant ones. The main purpose of deriving a global average temperature has shifted from one of monitoring changes over long periods of time, say decades, which requires a reasonably accurate but stable methodology and high-quality data sources, to one that chases the mirage of the ‘one true temperature,’ and increased precision in the service of media talking points and rebuttals to climate skeptics.
Adjustments are not questioned by skeptics because ‘they increase warming.’ As they stand, adjustments reduce the rate of warming during a period of less anthropogenic influence and reduce the rate of post-WWII cooling. They slightly nudge up temperatures to convert a lack of a trend into a positive trend. In other words, they seem to serve a variety of purposes, both political and scientific, at different points in time. Rather than cooling or warming overall, they appear to reduce the magnitude of natural variability that is likely present in the instrumental record, as each truth overwrites the previous one. The Climategate emails show the people in charge of deriving a global average openly discussing tweaking warming or cooling during various periods when talking about adjustments. The bias inherent in such a situation lies right in front of our eyes.
You’d think that the average weather of a location is its climate? You’d be wrong. The real deal goes like this: there is something called climate, which influences weather, the effect of which we see in everyday readings such as temperature.
In a climatological sense, both statements may be true. From the standpoint of measurement, however, the second one makes no sense. It is false. The ‘climate’ is unknowable to us except via measurement of the weather points that constitute it.
This distinction is lost in several climate discussions, even amongst brilliant scientists. An outcome of the distinction, and a basic principle one might add, is that measurements need to be performed independently of the effects the generated data are used to infer. Measurement precedes inference.
Standard climate thinking, however, has proceeded in the opposite direction when it comes to the creation of temperature series. Under this method, a local station will be ‘adjusted’ if it is deemed, by some metric, not to reflect the underlying climate. (Whereas you might think the underlying climate is inferred from measurements at a given station.)
I am stating nothing new here. David Stockwell presented the same basic earth-shattering logic (for climate science practitioners, that is) in this elegant write-up: Circularity of homogenization methods
He writes (emphasis mine):
If S is the target temperature series, and R is the regional climatology, then most algorithms that detect abrupt shifts in the mean level of temperature readings, also known as inhomogeneities, come down to testing for changes in the difference between R and S, i.e. D=S-R. The homogenization of S, or H(S), is the adjustment of S by the magnitude of the change in the difference series D.
When this homogenization process is written out as an equation, it is clear that homogenization of S is simply the replacement of S with the regional climatology R.
H(S) = S-D = S-(S-R) = R
While homogenization algorithms do not apply D to S exactly, they do apply the shifts in baseline to S, and so coerce the trend in S to the trend in the regional climatology.
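Stockwell’s algebra is easy to verify numerically. A minimal sketch with hypothetical numbers (bearing in mind his caveat that real algorithms apply only the baseline shifts, not the full difference series):

```python
# Hypothetical station series S and regional climatology R (invented numbers).
S = [14.1, 14.3, 13.9, 14.6]
R = [14.0, 14.2, 14.1, 14.4]

D = [s - r for s, r in zip(S, R)]   # difference series D = S - R
H = [s - d for s, d in zip(S, D)]   # homogenization H(S) = S - D

# S - (S - R) = R: the "homogenized" station is just the regional climatology.
assert H == R
```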
Stockwell further states (emphasis mine):
I would think the determination of adjustments would need to be completely independent of the larger trends, which would rule out most commonly used homogenization methods
There is proof confirming this diagnosis. Censorship-meisters Realclimate published a post by Zeke Hausfather on his paper co-authored with a US federal government NOAA employee.
They state the rationale underpinning the paper’s methodology:
Any major changes over time in individual stations that are not reflected in nearby stations are likely due to local (rather than regional) effects such as station moves, instrument changes, time of observation changes, or even such things as a tree growing over the thermometer stand. By removing any artifacts of individual station records not shared with other stations in their region, we can get a more accurate estimate of regional climate changes.
anomalies only work well IF the station records are not subject to localized changes due to non-climatic factors. In practice, at least over time spans of decades, this is rarely the case. So additional work (e.g. homogenization) must be done to remove any local perturbations that are not reflected in the regional climatology. Again, because longer-term climate changes occur regionally (not locally), and perturbation of a local record not reflected in other nearby stations is likely a non-climatic factor and should be removed if your goal is to calculate an unbiased estimate of regional climate changes over time.
This is circular.
An unbiased estimate of ‘regional climate change’ should emerge from well-curated, unadjusted temperature records of stations. Hausfather thinks the signal of climate change should be teased out by the guiding hand of adjustments made to stations.
What adjustments you do to a station should have nothing to do with climate. Any reasoning that violates this principle fails. Such adjusted series may perchance be representative of regional climate, but we would lack the means of knowing them to be so.
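The circularity can be made concrete with a toy neighbor-based adjustment (invented data, not the actual pairwise algorithm of any published paper): detect a step in the station-minus-neighbors difference series and subtract it out. The ‘adjusted’ station then simply reproduces the reference it was compared against.

```python
# Station with a step of +1.0 from index 3 onward; neighbors show no step.
# All values are invented for illustration.
station   = [10.0, 10.1, 10.0, 11.0, 11.1, 11.0]
neighbors = [10.0, 10.1, 10.0, 10.0, 10.1, 10.0]  # regional reference

diff = [s - n for s, n in zip(station, neighbors)]

# Estimate the shift in the difference series across the suspected break.
before = sum(diff[:3]) / 3
after = sum(diff[3:]) / 3
shift = after - before   # ~1.0

# "Homogenize": remove the shift from the post-break segment.
adjusted = station[:3] + [t - shift for t in station[3:]]

# The adjusted station now coincides with the neighbor composite.
assert all(abs(a - n) < 1e-9 for a, n in zip(adjusted, neighbors))
```

Whether the original step was a station move or a genuine local climate signal, the procedure erases it either way; that is precisely the point at issue.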
It is sobering to realize that most homogenization methods in use could be afflicted by this circular logic.