The nonsensical ‘97%’ number has become entrenched in climate propaganda. At one time, papers by William Anderegg and Peter Doran were employed to promote the figure. It may come as a surprise, but neither paper can support it. What is incredible is that researchers like Bart Verheggen, who, unlike John Cook and his associates, could reasonably be expected to be more balanced, promote and believe that Anderegg et al 2010 supports the ‘97% consensus’ claim.
Take a look at this:
You conduct a study in which you classify people as ‘Convinced by the Evidence’ (CE) or ‘Unconvinced by the Evidence’ (UE). By your definition, the ‘Convinced by the Evidence’ are those who wholly accept the human effect on climate, as laid out in a certain intergovernmental report.
You fill most (~70%) of the ‘Convinced by the Evidence’ category with names drawn from the author lists of the same intergovernmental report. After all, they were the ones who wrote the very report that forms your criterion.
This is circular inference. The people in the CE category are there by virtue of fulfilling criteria drawn from material they themselves put together: declare the IPCC report to be the ‘consensus’, then include IPCC authors in the consensus group for having written it. Voila!
The authors place 619 climate researchers in the CE category from the author lists of the IPCC Working Group I Fourth Assessment Report; they add 284 from voluntarily signed statements by scientists, bringing the total to 903. When researchers with fewer than 20 peer-reviewed papers are excluded, the total shrinks to 817. Even if one assumes all 86 who were removed were solely IPCC authors, one is left with 533 IPCC-derived names. In other words, a substantial 65% of the final ‘Convinced by the Evidence’ category is a product of the flawed methodology. Names for the ‘Unconvinced’ (UE) were pulled together from signed statements indicating dissent from IPCC orthodoxy.
Does the flaw affect the authors’ conclusions? Prior to application of the chosen ‘expert credibility’ metric, i.e., publication of at least 20 peer-reviewed papers, the numbers in the two categories are CE 903 and UE 472. After its application, they become CE 817 and UE 93. The CE category remains largely undiminished, and the difference in attrition between the two groups is highly significant (p<0.0001, chi-square), because a high proportion of CE members are IPCC WG1 authors. Scientists are chosen as IPCC authors by virtue of being academically active in their field of study – the very criterion evaluated by the 20-publication cutoff.
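The attrition figures can be checked with a back-of-the-envelope calculation. A minimal sketch follows; the source states only “p<0.0001, chi-square”, so the 2x2 retained-vs-removed setup here is my assumption:

```python
# Counts quoted above for Anderegg et al's two categories,
# before and after the publication cutoff.
ce_before, ce_after = 903, 817   # Convinced by the Evidence
ue_before, ue_after = 472, 93    # Unconvinced

a, b = ce_after, ce_before - ce_after   # CE retained / removed: 817, 86
c, d = ue_after, ue_before - ue_after   # UE retained / removed: 93, 379

# Plain chi-square for a 2x2 table: N*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))
n = a + b + c + d
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

print(f"CE retains {a / ce_before:.1%}, UE retains {c / ue_before:.1%}")
print(f"chi-square = {chi2:.1f}")  # far beyond the ~15.1 needed for p < 0.0001
```

The CE category retains roughly 90% of its members while UE retains under 20%, which is the differential effect discussed below.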
Thus, contrary to the authors’ claim about publication cutoffs not ‘differentially favoring’ a group, their method does have such an effect. The first error lies in the preanalytic step – one category is topped up with active scientists, selected by non-independent means. The circularity persists in the analytic step – the groups are then tested to see which of them has more active scientists.
The authors apply numerical metrics to identify the category (CE vs UE) that has greater expertise — more scientists implies more expertise, and more publications implies more expertise. They study (a) number of total climate publications, (b) top 50 most-published researchers, and (c) average citation count of second through fourth most cited papers.
The circularity, however, renders such exercises essentially uninformative. All Anderegg et al can tell us is that actively publishing scientists – like those who are invited to write IPCC reports – usually have more than 20 papers to their credit. One would hope this to be the case.
Lewandowsky’s ‘Recursive Fury’ – the subject of many posts here – has been retracted by the journal Frontiers in Psychology. The news of the retraction came pre-packaged with spin and bluster – on how only legal issues affected the journal’s decision and how Lewandowsky’s former employer was still hosting the paper’s pdf draft.
But actions speak louder than words. The question in front of the journal was two-fold: (a) the risk of legal action if the paper was published, and (b) its chances in court in the event of litigation. It would be fair to say their answers were: (a) not insignificant, and (b) quite poor.
The journal’s instincts are on display in FOI documents (pdf) from the University of Western Australia. The journal set up an external team of senior academics to evaluate the paper and the complaints, and put polite but pointed questions to the UWA office.
In turn, the university extracted compliance with a gag order from the journal:
Why would UWA not want the ethics report made public, and want the journal roped in? This was before the decision to retract was made. The information available suggests the paper underwent no formal ethics review. If true, this would have been immensely damaging to both the paper’s authors and the journal.
Lewandowsky and his co-authors are said to have signed gag orders as well. However, with the release of a 45-min video, and write-ups in the Guardian, Shaping Tomorrow’s World and numerous other venues pushing his narrative, it’s not clear what gagging is taking place at all.
Whose names is Lewandowsky protecting by refusing to disclose the complainants? The same people whom he defamed by labeling them conspiracists in his paper?
The so-called gag is of the same kind thrown up as the reason for not revealing which skeptical bloggers Lewandowsky sent his Moon Hoax survey to. In both instances, the people whose names he refused to utter sprang forward of their own accord to identify themselves publicly.
It doesn’t square with the FOI material (pictured above), which shows UWA demanded silence from the Frontiers academics.
The journal didn’t exactly cover itself in glory either. The numerous switches and changes it made to reviewers reflect the difficulty it had finding someone suitable. The final two reviewers are a revealing pair. Reviewer one, Viren Swami, was in addition the special topics editor for the issue the paper appeared in. Reviewer two was one Elaine McKewon, a UWA graduate and journalism PhD candidate. A committed climate consensus supporter, she is hardly the objective person to be reviewing a paper on the psychological profiles of allegedly conspiracist mental defectives, whom she does not hesitate to label ‘deniers’.
From McIntyre’s digging into previously released FOI documents, it appears Lewandowsky himself co-wrote portions of UWA’s ethics report inquiring into his previous ‘Moon Hoax’ paper. You can bet the senior academics on Frontiers’ panel are wondering about the provenance of the material UWA fed them that led them to conclude there were no issues with the ethical aspects of the present paper.
The doctored quote in Michael Mann’s legal reply, brought to attention by Climateaudit, is doing the rounds now.
Doctored quotes? Guess where my first reaction was to look.
Sure enough, this is what one finds on Skepticalscience:
In July 2010, the University of East Anglia published the Independent Climate Change Email Review report. They examined the emails to assess whether manipulation or suppression of data occurred and concluded that “The scientists’ rigor and honesty are not in doubt”. (emphasis added)
How oddly coincidental. The exact wording seen in Michael Mann’s 2013 legal memorandum – “…whether manipulation or suppression of data occurred and concluded that ‘The scientists’ rigor and honesty are not in doubt’” – shows up in John Cook’s 2010 web page, including the non-Australian spelling of ‘rigor’.
A quick Google search turns up several sources which contain the same phrasing but they lead back to Cook’s site. No one else seems to have worded anything Climategate-related quite this way.
Cook, if we remember, enthusiastically farmed out the services of his website and followers to Mann at his request. His behind-the-scenes collaboration with Mann in manufacturing web pages for the express purpose of defending Mann against criticism from Richard Muller is well-documented.
In July 2012, when Mann filed suit against the Competitive Enterprise Institute and Mark Steyn, Skepticalscience was there supporting Mann, linking to the same page above.
In the Cook group paper, the ‘authors’ measure the degree of acceptance of a ‘consensus’ in climate literature.
Remarkably enough, this is what they find:
From 1991 to 2011, the fractions of papers accepting the orthodox position decrease with time (Figures 1 & 2).
Of the papers said to have accepted a consensus position, the largest fraction, the ‘implicit endorsers’ (‘3’ in Figure 1), declines from 33% to about 24% (Figure 2).
Papers that explicitly support the consensus position (‘2’ in the graph) also decline.
Cook and co-authors claim to identify a ‘strengthening consensus’, among other increasing-consensus trends. The underlying data, however, do not support their claims. Instead, there is a remarkable stability in the overall composition of the literature, alongside a steady increase in the proportion of neutral (‘No position’) papers. In other words, no partisan category increases (or decreases) at the cost of another (Figures 3A & 3B).
Strangely enough, Cook and co-authors take note of these findings. Their interpretation however reveals a major problem in their analytic approach.
Cook and co-authors rationalize the decrease in the proportion of papers supporting the consensus, via a convoluted theory, as evidence of a high degree of consensus. They contend the decrease implies more papers have accepted the consensus and therefore no longer need to state it. At the same time, they take the increase in the absolute number of orthodox-position papers as evidence of ‘increasing consensus’.
The fallacy in reasoning is easily shown. Consider, as an example, a prosperous county which records 60 cases of pneumonia in 1993. The media raises a hue and cry. Stung by criticism, the county institutes rigorous public health and education measures. Twenty years pass and a survey is undertaken. The case count for 2012 is 80. The media goes on a rampage. Is this justified?
It turns out that it is not. The county experienced a population boom in the early 2000s, and the incidence of pneumonia per 100,000 per year actually fell during the period.
Now, imagine a mayor loudly criticizing local health officials for the increase in pneumonia cases, and a while later, traveling to a conference to boast that his city had lowered pneumonia rates due to measures undertaken by him.
This is exactly what Cook and his co-authors do. They put a different spin on two facets of the same observation.
Finally, if the proportion of papers accepting the orthodox position is decreasing, how does their absolute number go up? The explanation, it turns out, is deceptively simple.
Cook et al studied 11,944 papers for acceptance (or rejection) of AGW. There are papers which explicitly state something with respect to this question, and those that do not. As noted above (Figures 1, 3A & 3B), the overall composition of the literature remains more or less constant. Yet the total number of papers published in the climate field increases dramatically during this period (Figure 4), particularly after 2005.
Examine the composition when the two groups are broken apart. In the graph below (Figure 5), the light blue line is papers that say nothing explicit and the red line is papers that make an explicit statement. As can be seen, the rise in paper numbers seen in Figure 4 is almost entirely made up of papers that say nothing explicit about anthropogenic warming.
In their study, Cook and co-authors include a significant chunk of the rising group into the ‘endorse the consensus’ category. In a widely circulated draft, Tol reaches the same conclusion: “the apparent trend in endorsement is thus a trend in composition rather than in endorsement”.
The inclusion of papers into the consensus from a group of papers that is increasing over time, makes the consensus appear to increase over time.
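The composition effect is easy to reproduce with a toy model. All growth numbers below are invented; only the structure mirrors the argument: a flat endorsement rate among position-taking papers, a shrinking position-taking share, and a growing literature:

```python
# Toy model of a growing literature. Parameters are invented for illustration:
# the field grows ~12%/yr, the position-taking share drifts down, and 97% of
# position-takers endorse throughout.
years = range(1991, 2012)
endorsing, endorsing_share = [], []
for i, year in enumerate(years):
    total = int(200 * 1.12 ** i)                  # papers published this year
    position_frac = 0.40 - 0.01 * i               # fraction taking any position
    endorse = int(total * position_frac * 0.97)   # flat 97% among position-takers
    endorsing.append(endorse)
    endorsing_share.append(endorse / total)

# Absolute endorsement counts rise...
print("counts:", endorsing[0], "->", endorsing[-1])
# ...while the endorsing fraction of *all* papers falls.
print(f"share: {endorsing_share[0]:.1%} -> {endorsing_share[-1]:.1%}")
```

With nothing changing in anyone's views, the absolute number of ‘endorsing’ papers climbs simply because the field grows; calling that rise ‘increasing consensus’ is the mayor's boast from the analogy above.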
A lot of hard work went into it, no doubt. The mountain has laboured and brought forth a mouse.
What did the authors find? First, that about 32% of climate papers expressed a position on the cause of global warming. Fine. Second, of the papers that expressed a position, 97.1% ‘endorsed’ human-caused global warming. Accepted again.
Put two and two together. What does it tell us? That only about 31% of climate papers ‘endorsed’ human-caused global warming.
This, after counting up every climate paper over the past twenty-two years – more than eleven thousand of them.
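The two findings combine into the third by simple multiplication:

```python
# The two headline figures from the paper, multiplied together.
expressed_position = 0.32    # fraction of papers taking any position on the cause
endorse_among_those = 0.971  # endorsement rate among position-takers
endorse_overall = expressed_position * endorse_among_those
print(f"{endorse_overall:.1%} of all surveyed papers endorse")
```

About 31% of all surveyed papers, not 97%, is what the two numbers jointly say about the literature as a whole.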
Nice ‘consensus’ you’ve got there guys.
In 2012, Stephan Lewandowsky and co-authors submitted a paper to the journal Psychological Science, generating widespread publicity. Here, I address a simple question that has hovered around the paper from the time it made its appearance. The issue is at the heart of Lewandowsky’s first ‘Moon Hoax’ paper and of the in-limbo second paper in Frontiers in Psychology.
The ‘Moon Hoax’ paper (a.k.a LOG12, LOG13 etc) draws a number of conclusions about climate skeptics (called ‘deniers’). A major portion of the data and analysis is devoted to ‘rejection of climate science’. The paper’s title advertises its findings about ‘deniers’.
So the question is: how did Lewandowsky and co-authors study climate skeptics?
The paper draft (pdf) stated simply that the authors ‘approached’ 5 skeptic blogs to post a survey, but ‘none did’. This led to a hunt to find out who exactly these bloggers were (Lewandowsky wouldn’t tell). Lewandowsky spread a significant amount of distraction and smoke on the matter, raising a hue and cry that he did email skeptical bloggers:
First out of the gate was the accusation that I might not have contacted the 5 “skeptic” bloggers, none of whom posted links to my survey. Astute readers might wonder why I would mention this in the Method section, if I hadn’t contacted anyone.
What matters, however, is not whether or not Lewandowsky contacted skeptics but what came of such contact. The whole point of contacting the bloggers was to get the survey posted on their websites to ensure skeptic participation. This never took place. Through the noise, the question of the non-sampling of skeptics remained unresolved‡.
An answer of sorts arrived when the paper itself appeared in final form about a month ago. On examination, the authors appear to have settled on a remarkable method of addressing the defect. In the supplementary information, Lewandowsky et al (LOG13) make a startling claim: the blogs that did carry their survey have a broad readership, ‘as evidenced by the comment streams’:
All of the blogs that carried the link to the survey broadly endorsed the scientific consensus on climate change. As evidenced by the comment streams, however, their readership was broad and encompassed a wide range of view on climate change.
The authors claim to have analysed reader comments at one venue to determine this. They state:
To illustrate, a content analysis of 1067 comments from unique visitors to http://www.skepticalscience.com, conducted by the proprietor of the blog, revealed that around 20% (N = 222) held clearly “skeptical” views, with the remainder (N = 845) endorsing the scientific consensus.
Extrapolating, the authors further infer that close to eighty thousand skeptics saw Lewandowsky’s survey on Skepticalscience alone (see below). Owing to such broad readership, enough skeptics are said to have been exposed to the survey.
Readers of climate blogs will at once see several things that are off. However, these are the assertions forming the basis on which Lewandowsky et al 2013 rests.
To start, let the authors’ premise be accepted: that comment streams can be analysed to determine whether a blog has a broad readership or a more polarized one.
Comments on six blogs where Lewandowsky et al’s survey was posted were analysed. Commenter names and comment counts were obtained from web pages using R scripts. Following the authors’ method, this was carried out for the entire month the survey was posted. For each blog, duplicates were removed.
Commenters were classified as (a) skeptic, (b) ‘warmist’, (c) ‘non-skeptic’, (d) lukewarmer, (e) neutral, or (f) indeterminate. Regulars whose orientations are familiar (e.g., dana1981 – ‘warmist’) were tagged first. Those with insufficient information to classify, and infrequent posters with singleton comments, were tagged ‘indeterminate’†.
The results are presented below. A total of 614 commenters contributed 4976 comments to six blogs in the month the survey was posted (range: 2 – 2387 comments/blog). An estimated 111 commenters posted across blogs, with 504 unique commenter aliases from all blogs.
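The dedup and tally bookkeeping described above can be sketched in a few lines. The actual scraping was done with R scripts; the blog names and commenter aliases below are invented placeholders, showing only the shape of the computation:

```python
from collections import Counter

# Sketch of the per-blog dedup/tally step. Names and counts are placeholders,
# not data from the surveyed blogs.
blog_comments = {
    "blog_a": ["alice", "bob", "alice", "carol"],
    "blog_b": ["bob", "dave", "dave", "erin", "bob"],
}

# Unique commenters per blog (within-blog duplicates removed)
per_blog_unique = {blog: set(names) for blog, names in blog_comments.items()}

total_comments = sum(len(names) for names in blog_comments.values())
all_unique = set().union(*per_blog_unique.values())

# Commenters who posted on more than one blog
appearances = Counter(n for names in per_blog_unique.values() for n in names)
cross_posters = sorted(n for n, k in appearances.items() if k > 1)

print(total_comments, len(all_unique), cross_posters)  # 9 5 ['bob']
```

The same three quantities computed here (total comments, unique aliases across blogs, cross-blog posters) correspond to the 4976, 504 and 111 figures reported below.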
The results show a skewed commenter profile. As a whole, there are 59 skeptical commenters, amounting to about 9.5% of the total. Individually, skeptics range from 5–11% of commenters across blogs, with one venue (Hot Topic) showing 19% skeptics; closer examination shows this to be made up of just 10 commenters. Non-skeptics are close to 80%, i.e., 480 of 614. Neutral posters are 9%, and indeterminate 3%. Of the 59 skeptics, more than half come from comments posted at one blog (Deltoid).
The same pattern can be seen to repeat by blog:
The marked difference in comment number between the blogs obscures underlying similarities. When commenter proportions are made equal, these become plain:
From the data above, it is evident these blogs are not places where readership is “broad” or encompasses a wide range of views on climate. To the contrary, these are highly polarized, partisan blogs serving their cliques. Half of the blogs (Scott Mandia, A Few Things Ill Considered, and Bickmore’s Climate Asylum) hosted comments from a total of just 6 skeptical commenters.
The non-surveyed Skepticalscience.com
What about Skepticalscience’s comment stream? Lewandowsky et al state that John Cook analysed 1067 comments at his website to identify 222 skeptics, and they use this to buttress claims of broad readership in the survey blogs. One wonders how Cook got these fantastic figures! When commenters for Sept 2010 are analysed, there are 36 skeptical voices out of a total of 286. Cook’s estimate is inflated about six times over. In reality, skeptics form roughly 12.6% of commenters for that month, and barely 3% of John Cook’s claimed 1067. These results are corroborated by independent analysis performed by A.Scott.
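The fractions can be verified in a couple of lines, using the counts as given above:

```python
# Checking the Skepticalscience fractions quoted above.
skeptics, commenters = 36, 286            # Sept 2010 tally
cook_comments, cook_skeptics = 1067, 222  # figures claimed in LOG13's SI

frac_of_commenters = skeptics / commenters     # ~0.126
frac_of_cook_total = skeptics / cook_comments  # ~0.034
inflation = cook_skeptics / skeptics           # ~6.2x

print(f"{frac_of_commenters:.1%} skeptics; claimed count inflated {inflation:.1f}x")
```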
Furthermore, close to 90% of commenting viewers are not skeptics. Contrary to Lewandowsky et al, Skepticalscience is not a place where readership is “broad and encompasses a wide range of view on climate”. In fact Skepticalscience exactly matches Deltoid, a virulently anti-skeptic website, in commenter profile.
Importantly however, John Cook never posted the survey at Skepticalscience (see here and here). In the face of this false claim, the authors’ post-hoc exercise of computing skeptic exposure becomes counterfeit.
How would the picture have been had Lewandowsky et al actually obtained survey exposure with a skeptical audience? As a comparative exercise, I pulled comment counts from widely read skeptical blogs Wattsupwiththat, Bishop Hill, Joanne Nova and Climate Audit for the same period. Traffic figures provided by Anthony Watts indicate close to 3 million visits in August 2010. The results ought to be eye-opening:
A number of things can now be confirmed. The authors of Lewandowsky et al 2013 did not survey skeptical blogs. The websites that carried the survey neither have a broad readership, nor represent skeptical readers and commenters. The authors did not survey any readers at the website Skepticalscience, but represent their data and findings as though they did. Lastly, the authors’ calculations assessing survey exposure, which they base on the same Skepticalscience, are shown to be wrong.
With the above, conclusions drawn about skeptics by Lewandowsky et al, from sampling a population of readers and commenters who are not skeptics, can be termed invalid. At best, the study’s skeptic-related analysis is meaningless, arising from non-representative sampling. At worst, there is the possibility of false conclusions owing to flawed survey exposure. The above data, combined with the Lewandowsky et al 2013 survey results, in fact show one possible outcome of displaying loaded questions about climate skeptics to a non-skeptical audience. Conclusions about non-skeptical ‘pro-science’ commenters and their psychology would probably be more appropriate.
‡ The list of surveyed blogs (from Lewandowsky et al 2013 SI):
Skepticalscience – http://www.skepticalscience.com
Tamino – Open Mind http://tamino.wordpress.com
Climate Asylum – http://bbickmore.wordpress.com
Climate change task force – http://www.trunity.net/uuuno/blogs/
A few things ill considered – http://scienceblogs.com/illconsidered/
Global Warming: Man or Myth? – http://profmandia.wordpress.com/
Deltoid – http://scienceblogs.com/deltoid/
Hot Topic – http://hot-topic.co.nz/
Note that (a) there is no record of Skepticalscience having posted the survey, and (b) the Climate Change Task Force entry is available on the Wayback Machine (e.g., here)
† Batch Google searches (e.g., http://google.siliconglobe.co.uk/) and keyword searches on scraped HTML blog posts were used to search for commenter output. Multiple entries were frequently required for each commenter to be satisfactorily classified. Wherever possible (which was the case in almost all instances), results from August and Sept 2010 were employed. Comments supportive of the consensus, critical of ‘deniers’ and ‘skeptics’, and/or unequivocally appreciative of the article (e.g., “great post, now I can use this in my arguments with deniers”) were classified as coming from ‘warmists’. Comments approving of the main thrust of a ‘warmist’ blog post, but with no further information available, were classified as ‘ns’ – not skeptic. Commenters questioning the basic premises of a blog post, being addressed with ‘denier’, ‘denial’ etc., and whose stance could be verified by similar behaviour in other threads, were classified as ‘skeptics’. In most instances they were easily recognized. Those for whom no determination could be made, owing to various factors, were classified as ‘indeterminate’. Commenters explicitly professing acceptance of the consensus but posing relatively minor questions were classified as lukewarmers. Classification required reading at least two different comments for almost every commenter, except where commenter orientation was known from prior experience. Certainly there will be errors to a degree, and subjectivity is involved. It is unavoidable that infrequent (and singleton) commenters, and those with non-unique names (‘tom’, ‘john’), are resistant to classification. Validation of the method was available when blogger A.Scott arrived at similar results working independently on portions of the data.
This article was published at WUWT.
One of the main indicators of the ‘ghetto-ization’ of climate blogging is a complete lack of response to criticism one encounters. John Cook’s Skepticalscience is a prime example in this regard. These people won’t respond to criticism even if their lives depended on it.
But, on occasion, they will: if they think such criticism might reach important ears, or if they feel there might be blowback.
Cook is currently in the middle of one such episode. He has had to respond to Bishop Hill revealing (via Barry Woods’ work) that he and his fellow author Lewandowsky identified Richard Betts, Head of Climate Impacts at the UK Met Office, as a ‘conspiracist’.
How does he explain this?
The paper’s methods are quite clear on what was done.
- The authors define a ‘recursive hypothesis’ – “…any potentially conspiracist ideation that pertained to the article itself or its author, unsubstantiated and potentially conspiracist allegations pertaining to the article’s methodology, intended purpose, or analysis”
- The authors use Google searches, Alexa rankings and direct site visits to gather recursive hypotheses
- The authors excerpt ‘blog posts’ that published recursive theories into a master table, with – and this is key – ‘each excerpt representing a mention of the recursive theory’
Examine point #3 again, just in case. The authors claim that “all recorded instances” of recursive theories are in their supplementary data table. Betts’ comment qualifies under a specific conspiracist idea – ‘didn’t email deniers’.
The thread with Betts’ comment focused entirely on Lewandowsky’s data and has over one hundred comments. The table contains 9 entries for “didn’t email deniers” excerpted from the thread. All excerpted comments meet the authors’ criteria for ‘recursion’, i.e., express some judgement about Lewandowsky’s method, purpose, analysis or motive. This includes Richard Betts’ comment.
Lewandowsky and Cook now claim:
- “we are certainly not claiming that [Betts] is a conspiracy theorist”
- Betts’ name being in the table “attests to the thoroughness of daily Google search”
- the supplementary table just represents “raw data”.
None of the above can be correct. It is not possible for the table to be just “raw data”, as their own description of the method shows. Nor does the comment selection reflect the thoroughness of the Google search; rather, it reflects the faithful identification of comments/posts carrying recursive conspiracist ideas as defined. As a result, the table does imply that Betts is a conspiracy theorist.
If we accept that Betts is not a ‘conspiracy theorist’, then the same must apply to the other contributors found by the authors’ searches as well. The Betts comment is qualitatively no different from the others.
It would be interesting to see how Lewandowsky and his co-authors show this not to be true.
On his blog Skepticalscience, esteemed doctoral fellow John Cook writes of a commenter’s reaction to his colleague Lewandowsky’s as yet unpublished paper:
“LOG12 was fundemenatlly [sic] flawed from the start, and throughout. It offered no valuable insight or understanding as a result. It is clear to any rational outside observer it had one purpose – to be used to promote the authors advocacy of catastrophic anthropogenic global warming – and to demean and denigrate those who do not believe as he does. The fact this paper has never been published, as Lewandowsky’s repeatedly claims, confirms this finding.”
Cook laughs at this commenter as a ‘conspiracist’ for thinking non-publication confirmed how bad the paper really was.
It will be interesting to see whether this commenter resists the “Something Must Be Wrong” urge when LOG12 is published or continue to assert that the research is “a fraud”.
No, Cook. That’s not the ‘something must be wrong’ urge, that’s the ‘any serious academic would see right through this’ type of wishful thinking.
Thinking your colleague’s paper didn’t get published because of how bad it is, is placing faith in the academic peer-review process: one hopes reviewers and editors would see the questions and criticism raised about the paper. Your commenters and critics come from a place where higher standards reside.
There are two simple, yet serious questions about his paper. Question number one: where is the ethics approval section of the paper?
Now, I might be mistaken. The section could be in the paper. After all, the paper is 57 pages long and the ethics review section could be hidden someplace. On top of that, I am a ‘denier’, so I might not be seeing what’s there.
On to question number two. Why are there what appear to be fabrications and falsifications in the paper?
Again, this has to be clearly understood. People make all sorts of mistakes in research. The kind of errors that are considered serious enough to constitute scientific misconduct are hard to pin down. As a shortcut, the US NSF for instance makes the determination that any act that constitutes fabrication, falsification or plagiarism, would qualify.
The kind of use of comment material that Stephan Lewandowsky, Cook and the others appear to have made in their paper seems to fall squarely in falsification and fabrication territory. Brandon Shollenberger’s post is published at a prominent outlet, WUWT.
Shollenberger’s evidence doesn’t rely on interpretive grounds to support this conclusion: the excerpted quotes and the full quotes with their context are provided in the open.
Cook usually does not respond to criticism. But this is about a scientific publication in the public domain, and of the questions above, the second is serious. It requires a response.
This post by Brandon Shollenberger shows Lewandowsky and Cook to have fabricated quotes from material left by commenters on blogs. The quotes were then used, in their peer-reviewed climate communication paper in the prestigious Frontiers in Psychology stable, to imply entirely different meanings from what the original comments intended.
How is this possible? The peer-review process usually has a strong track record of trapping errors such as these.
Lewandowsky and Cook’s paper, a draft of which is available, also lacks any description of an institutional review process or approval for their study. Usually reviewers are prompt in asking for such things, and rarely, if ever, can anyone get past them. Even the lowliest of studies involving human subjects entails examination by an ethics board or institutional review board.
How did this happen? Maybe, there is a simple explanation. Perhaps Lewandowsky and Cook obtained institutional review via proper channels but failed to mention them in their paper.