The nonsensical ‘97%’ number has become entrenched in climate propaganda. At one time, papers by William Anderegg and Peter Doran were employed to promote the figure. This may come as a surprise, but neither paper can support it. What is incredible is that researchers like Bart Verheggen who, unlike John Cook and his associates, could reasonably be expected to be more balanced, believe and promote the claim that Anderegg et al 2010 supports the ‘97% consensus’.
Take a look at this:
You conduct a study in which you classify people as ‘Convinced by the Evidence’ (CE) or ‘Unconvinced by the Evidence’ (UE). According to your definition, the ‘Convinced by the Evidence’ are those who wholly accept the human effect on climate as laid out in a certain intergovernmental report.
You fill most (~70%) of the ‘Convinced by the Evidence’ category with names drawn from the author lists of that same intergovernmental report. After all, they were the ones who wrote the very report that forms your criterion.
This is circular inference. The people in the CE category are there by virtue of fulfilling criteria derived from material they themselves put together: you declare the IPCC report to be the ‘consensus’, then you place IPCC authors in the consensus group for having written that report. Voila!
The authors place 619 climate researchers in the CE category from the author lists of the IPCC Working Group I Fourth Assessment Report; they add 284 from statements voluntarily signed by scientists, bringing the total to 903. When researchers with fewer than 20 peer-reviewed papers are excluded, the total shrinks to 817. Even if one assumes that all 86 removed names were IPCC authors, one is still left with 533 IPCC authors. In other words, a substantial 65% (533 of 817) of the final ‘Convinced by the Evidence’ category is the product of this flawed methodology. Names for the ‘Unconvinced’ (UE) were pulled together from signed statements of dissent from IPCC orthodoxy.
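The arithmetic behind that 65% figure is easy to check. A minimal sketch using only the counts quoted above (the variable names are mine, not the paper’s):

```python
# Counts quoted from Anderegg et al 2010
ipcc_authors = 619     # CE names drawn from IPCC WG1 AR4 author lists
signatories = 284      # CE names drawn from voluntarily signed statements
ce_initial = ipcc_authors + signatories        # 903
ce_after_cutoff = 817  # CE names left after the publication cutoff
removed = ce_initial - ce_after_cutoff         # 86

# Worst case for the argument: assume every removed name was an IPCC author
ipcc_remaining = ipcc_authors - removed        # 533
share = ipcc_remaining / ce_after_cutoff       # fraction of final CE

print(ce_initial, removed, ipcc_remaining, round(share * 100))  # 903 86 533 65
```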
Does the flaw affect the authors’ conclusions? Prior to application of the chosen ‘expert credibility’ metric, i.e., publication of more than 20 peer-reviewed papers, the two categories stand at CE 903 and UE 472. After its application, they become CE 817 and UE 93. The CE category remains significantly less diminished (p&lt;0.0001, chi-square) because a high proportion of its members are IPCC WG1 authors, and scientists are chosen as IPCC authors by virtue of being academically active in their field of study – the very attribute the ‘20 publications’ cutoff measures.
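The disparity in attrition can be verified from the four counts alone. A sketch of the 2×2 Pearson chi-square test (retained vs removed, without continuity correction) using only the standard library; a library routine such as SciPy’s contingency-table test would serve equally well:

```python
import math

# Retained vs removed after the publication cutoff, per the counts above
a, b = 817, 903 - 817   # CE: 817 kept, 86 dropped
c, d = 93, 472 - 93     # UE:  93 kept, 379 dropped

# Pearson chi-square for a 2x2 table via the shortcut formula
n = a + b + c + d
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Survival function of chi-square with 1 degree of freedom: p = erfc(sqrt(x/2))
p = math.erfc(math.sqrt(chi2 / 2))

print(round(chi2, 1), p < 0.0001)  # 693.7 True
```

The two groups lose members at wildly different rates (9.5% of CE vs 80% of UE), so the statistic is enormous and the p-value far below 0.0001 – but, as argued above, this says more about how the CE list was assembled than about expertise.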
Thus, contrary to the authors’ claim that the publication cutoff does not ‘differentially favor’ either group, their method has exactly that effect. The first error lies in the preanalytic step – one category is topped up with active scientists selected by non-independent means. The circularity persists into the analytic step – the groups are then tested to see which of them contains more active scientists.
The authors apply numerical metrics to identify which category (CE vs UE) has greater expertise – more scientists implies more expertise, and more publications implies more expertise. They examine (a) the total number of climate publications, (b) the 50 most-published researchers, and (c) the average citation count of each researcher’s second through fourth most-cited papers.
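Metric (c) is easy to misread, so here is how it works on a hypothetical citation list – a sketch, not the authors’ code; the function name and example numbers are mine, and dropping the single most-cited paper presumably damps the effect of one outlier hit:

```python
def mid_citation_average(citations):
    """Average citations of a researcher's 2nd- through 4th-most-cited papers."""
    ranked = sorted(citations, reverse=True)
    window = ranked[1:4]            # skip the top paper, take the next three
    return sum(window) / len(window)

# Hypothetical researcher whose papers have 500, 120, 80, 60, 10 citations:
# the top paper (500) is excluded, leaving (120 + 80 + 60) / 3.
print(mid_citation_average([500, 120, 80, 60, 10]))  # ~86.7
```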
The circularity, however, renders such exercises essentially uninformative. All Anderegg et al can tell us is that actively publishing scientists – like those who are invited to write IPCC reports – usually have more than 20 papers to their credit. One would hope this to be the case.