The Cook group’s paper measures the degree of acceptance of a ‘consensus’ in the climate literature. Here I address a simple question that has hovered around the paper since it appeared‡: the ‘implicit endorsers’ of anthropogenic warming (see, e.g., here and here for discussion).
Shown below in the top panel (Figure 1) are the total papers Cook et al. classified in their project. An abrupt increase in climate-related papers is seen after 2005. From the bottom panel, it is evident that papers that do not state a position on anthropogenic global warming make up most of the rise.
Now, Cook and colleagues have spread the message widely that 97% of a ‘large number of scientific abstracts’ support anthropogenic global warming (examples here, here, here and here). From the University of Queensland’s press release:
About 97 per cent of 4000 international scientific papers analysed in a University of Queensland-led study were rated as endorsing human-caused global warming.
How this happened is known: a large number of papers not stating a position on AGW were classified as ‘implicitly’ accepting the orthodox climate position.
What is the risk that an abstract is classified as an implicit endorser? Of the 7 major categories, 4 are based on explicit statements in abstracts, and these are not susceptible. Category ‘5’, for abstracts that imply rejection of AGW, is also less likely to be mistaken. It is the remaining large number of papers with no stated position on anthropogenic warming that are at risk†.
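To make the category logic concrete, here is a minimal sketch in Python, with labels paraphrased from the paper’s seven-point rating scheme, of which ratings the argument above treats as susceptible to drifting into ‘implicit endorsement’:

```python
# Cook et al.'s seven-point rating scheme (labels paraphrased), flagged by
# whether the argument above treats the category as susceptible to being
# confused with category 3, 'implicit endorsement'.
CATEGORIES = {
    1: ("explicit endorsement with quantification", False),
    2: ("explicit endorsement without quantification", False),
    3: ("implicit endorsement", False),   # the contested category itself
    4: ("no position", True),             # at risk: nothing stated on AGW
    5: ("implicit rejection", False),     # less likely to cross the divide
    6: ("explicit rejection without quantification", False),
    7: ("explicit rejection with quantification", False),
}

at_risk = [c for c, (label, susceptible) in CATEGORIES.items() if susceptible]
print(at_risk)  # -> [4]: only the no-position abstracts are at risk
```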
From their data, it can be determined that roughly a third of at-risk abstracts were classified as ‘implicit endorsers’ (median 27%, range: 19-43%).
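A back-of-the-envelope version of that computation, with made-up yearly counts standing in for their data:

```python
import statistics

# Illustrative yearly counts only; these are not Cook et al.'s numbers.
# at_risk[y]:  abstracts with no stated position on AGW in year y
# implicit[y]: of those, how many were rated 'implicit endorsement'
at_risk  = {2001: 310, 2002: 345, 2003: 400, 2004: 450, 2005: 520}
implicit = {2001:  84, 2002:  66, 2003: 120, 2004: 130, 2005: 170}

fractions = {y: implicit[y] / at_risk[y] for y in at_risk}
print({y: round(f, 2) for y, f in fractions.items()})
print("median:", round(statistics.median(fractions.values()), 2))
print("range: %.2f to %.2f" % (min(fractions.values()), max(fractions.values())))
```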
Now, turn to another aspect of the study. Every paper received ratings from two persons, and there was an error rate between them. In estimating how this error acts, the considerations noted earlier evidently apply again. Abstracts with explicit statements are less likely to be erroneously classified. Papers rejecting the orthodox position are less likely to be interpreted across the divide. The same neutral papers identified above would likely be most affected by error in the volunteers’ classification.
With these two aspects in mind, examine the data shown below (Figure 2):
In Figure 2, the left panel shows the fraction of papers with no stated position that were classified as ‘implicit endorsers’. On the right is Cook’s error rate (0.33) applied to the papers that are most susceptible. Do the two look similar?
Indeed, the two quantities track close to one another, especially before 2005. Their correlation (Figure 3) is statistically significant (p << 0.05, Spearman).
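The comparison and the test are straightforward to reproduce in outline. A minimal sketch, again with invented series in place of the actual Figure 2 data:

```python
from scipy.stats import spearmanr

# Invented yearly series (not the actual Figure 2 data): the fraction of
# no-position abstracts rated 'implicit endorsement' (left panel), and the
# 0.33 error rate applied to each year's most-susceptible abstracts (right).
implicit_frac = [0.27, 0.19, 0.30, 0.29, 0.33, 0.43, 0.26]
error_frac    = [0.25, 0.21, 0.28, 0.31, 0.34, 0.40, 0.27]

rho, p = spearmanr(implicit_frac, error_frac)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")  # significant if p < 0.05
```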
Is the similarity between implicit endorsements and the error fraction (Figure 4) a coincidence? It is possible. But it points to a basic observation: the implicit endorsement category is nothing but the error in the classification exercise.
Given the exercise undertaken by Cook et al., if a handful of raters are handed a mass of neutral abstracts, about a third will be classified as ‘implicit endorsers’, regardless.
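That claim is easy to demonstrate with a toy simulation. A sketch, assuming (purely for illustration) that each neutral abstract independently drifts into ‘implicit endorsement’ with probability 0.33:

```python
import random

random.seed(0)

# Assumed per-abstract drift probability, taken from the error rate above;
# this is an illustration, not Cook et al.'s rating protocol.
P_DRIFT = 0.33

def rate_pile(n_abstracts):
    """Fraction of genuinely neutral abstracts labelled 'implicit endorsement'."""
    hits = sum(random.random() < P_DRIFT for _ in range(n_abstracts))
    return hits / n_abstracts

for n in (200, 2000, 20000):
    print(n, round(rate_pile(n), 3))
# About a third come back as 'implicit endorsers', regardless of pile size.
```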
This explanation reconciles several observations. It accounts for the fact that total ‘endorsements’ and no-position (category ‘4’) abstracts seem inversely related. The ‘implicits’, which form the bulk of the endorsements, interact with the no-position abstracts in reciprocal fashion during classification: these are simply categories where one can be mistaken for the other. Cook’s convoluted explanation is wrong: papers in these putative groups do not interact reciprocally in the real world.
Furthermore, it explains the steady proportions of categories observed (Figure 5). Which is more likely? That thousands of scientists working in hundreds of disparate fields write an ever-increasing number of scientific papers, a near-constant fraction of which somehow ‘implicitly endorses’ an orthodox position? Or that a handful of volunteers classify papers with a method that affects their results in a roughly uniform manner?
The ‘implicit endorsement’ category Cook’s group invented illustrates the devilish intricacies that can arise in classification studies. Papers were added to the category merely because a predetermined rating system suggested it to volunteers, who then went looking for it. It serves as a paradigm of how researchers can imprint methodological and observer biases on the material they set out to study.
‡ Glancing at this table is not essential, but it is very useful.
† (i.e., papers shown in blue in the bottom panel of Figure 1, minus ‘implicit’ rejectors)
Cook’s team has refused to release the error discrepancy data to date.