Susan Fiske, a professor of psychology at Princeton University, recently wrote a longish rant about critics of research who operate from social media platforms and blogs. This has people like Andrew Gelman and neuroskeptic quite excited (see their respective articles here and here).
Why am I dinging these excellent bloggers, you ask? After all, their response to Fiske, who accuses bloggers of ‘methodological terrorism’ and labels them ‘destructo-critics’, contains a number of valid points.
Both Gelman and neuroskeptic have seen the work of Stephan Lewandowsky. They had every opportunity to examine his work thoroughly and reach their conclusions. Both chose to stand in support of Lewandowsky.
Gelman has not commented on Lewandowsky’s papers directly, but he nonchalantly promoted one of Lewandowsky’s opinion pieces with Dorothy Bishop, a paper on methods of efficient gate-keeping and of preventing methodological terrorists (bloggers) from getting published. neuroskeptic’s attitude toward Lewandowsky’s methods can only be described as a form of aggressive ignorance: he promoted the now-retracted ‘Recursive Fury’ paper, which dealt in labeling the same methodological terrorists as ‘conspiracists’.
In other words, what mattered was who was being disparaged and labeled. When the critics were climate skeptics, that was fine. Now that Fiske’s open-ended, vague attack might include bloggers like them, they are unhappy.
This simply shows that acceptance, or even serious consideration, of criticism of scientific results depends not merely on the validity of the points being made, but on the social context, the packaging, and the channels through which the criticism arrives. Gelman and neuroskeptic are no more immune to wild rants, ideological blind spots, and irrational thinking about ‘methodological terrorism’ than Susan Fiske is.
They would do well to stop preaching to her.
In 2012, Stephan Lewandowsky and co-authors submitted a paper to the journal Psychological Science, generating widespread publicity. Here, I address a simple question that has hovered around the paper from the time it first appeared. The issue is at the heart of Lewandowsky’s first ‘Moon Hoax’ paper and of the in-limbo second paper in Frontiers in Psychology.
The ‘Moon Hoax’ paper (a.k.a. LOG12, LOG13, etc.) draws a number of conclusions about climate skeptics (called ‘deniers’). A major portion of the data and analysis is devoted to ‘rejection of climate science’. The paper’s title advertises its findings about ‘deniers’.
So the question is: how did Lewandowsky and co-authors study climate skeptics?
The paper draft (pdf) stated simply that the authors ‘approached’ 5 skeptic blogs to post a survey, but ‘none did’. This led to a hunt to find out who exactly these bloggers were (Lewandowsky wouldn’t tell). Lewandowsky spread a significant amount of distraction and smoke on the matter, raising a hue and cry that he did email skeptical bloggers:
First out of the gate was the accusation that I might not have contacted the 5 “skeptic” bloggers, none of whom posted links to my survey. Astute readers might wonder why I would mention this in the Method section, if I hadn’t contacted anyone.
What matters, however, is not whether Lewandowsky contacted skeptics but what came of such contact. The whole point of contacting the bloggers was to get the survey posted on their websites to ensure skeptic participation. This never took place. Through the noise, the question of non-sampling of skeptics remained unresolved‡.
As if to provide an answer, the paper itself appeared in final form about a month ago. On examination, the authors appear to have settled on a remarkable method of addressing the defect. In the supplementary information, Lewandowsky et al (LOG13) make a startling claim: the blogs that did carry their survey have a broad readership, ‘as evidenced by the comment streams’:
All of the blogs that carried the link to the survey broadly endorsed the scientific consensus on climate change. As evidenced by the comment streams, however, their readership was broad and encompassed a wide range of view on climate change.
The authors claim to have analysed reader comments at one venue to determine this. They state:
To illustrate, a content analysis of 1067 comments from unique visitors to http://www.skepticalscience.com, conducted by the proprietor of the blog, revealed that around 20% (N = 222) held clearly “skeptical” views, with the remainder (N = 845) endorsing the scientific consensus.
Extrapolating, the authors further infer that close to eighty thousand skeptics saw Lewandowsky’s survey on Skepticalscience alone (see below). Owing to such broad readership, enough skeptics are said to have been exposed to the survey.
Readers of climate blogs will at once see several things that are off. However, these are the assertions forming the basis on which Lewandowsky et al 2013 rests.
To start, the authors’ premise is accepted: comment streams can be analysed to determine whether a blog has a broad readership or a more polarized one.
Comments on six blogs where Lewandowsky et al’s survey was posted were analysed. Commenter names and comment counts were obtained from web pages using R scripts. Following the authors’ method, this was carried out for the entire month the survey was posted. For each blog, duplicates were removed.
Commenters were classified as (a) skeptic, (b) ‘warmist’, (c) ‘non-skeptic’, (d) lukewarmer, (e) neutral, or (f) indeterminate. Regulars whose orientations are familiar (e.g., dana1981 – ‘warmist’) were tagged first. Those with insufficient information to classify, and infrequent posters with singleton comments, were tagged ‘indeterminate’†.
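The scrape-and-tag workflow described above was done with R scripts; as a minimal sketch of the same idea in Python (the comment markup, sample page, and tag table below are illustrative assumptions, not the actual scripts or data):

```python
import re
from collections import Counter

# Illustrative markup only: real comment HTML varies by blog platform.
SAMPLE_PAGE = """
<div class="comment"><span class="author">dana1981</span>Great post.</div>
<div class="comment"><span class="author">tom</span>Hmm.</div>
<div class="comment"><span class="author">dana1981</span>Agreed.</div>
"""

# Hand-maintained tags for regulars whose orientation is familiar;
# everyone else defaults to 'indeterminate' pending manual review.
KNOWN_TAGS = {"dana1981": "warmist"}

def extract_commenters(html):
    """Pull commenter aliases from a page and count comments per alias."""
    names = re.findall(r'<span class="author">([^<]+)</span>', html)
    return Counter(names)

def classify(counts, known=KNOWN_TAGS):
    """Deduplicate aliases and assign each commenter a category."""
    return {name: known.get(name, "indeterminate") for name in counts}

counts = extract_commenters(SAMPLE_PAGE)  # dana1981: 2, tom: 1
tags = classify(counts)                   # dana1981 -> 'warmist'
```

Deduplication falls out of the `Counter`/dict keys, matching the per-blog duplicate removal described above; the manual classification step has no clean programmatic equivalent and is represented here only by the hand-maintained tag table.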
The results are presented below. A total of 614 commenters contributed 4976 comments to six blogs in the month the survey was posted (range: 2 – 2387 comments/blog). An estimated 111 commenters posted across blogs, with 504 unique commenter aliases from all blogs.
The results show a skewed commenter profile. As a whole, there are 59 skeptical commenters, amounting to about 9.5% of total. Individually, skeptics range from 5-11% of commenters between blogs, with one venue (Hot Topic) showing 19% skeptics. Closer examination shows this to be made up by just 10 commenters. Non-skeptics are close to 80%, i.e., 480 of 614. Neutral posters are 9%, and indeterminate 3%. Of the 59, more than half are from comments posted at one blog (Deltoid).
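The headline proportions follow directly from the reported counts; a quick arithmetic check, using only the figures quoted above:

```python
# Counts reported above: 59 skeptics and 480 non-skeptics of 614 commenters
skeptics, non_skeptics, total = 59, 480, 614

skeptic_share = skeptics / total          # ~0.096, i.e. roughly 9.5%
non_skeptic_share = non_skeptics / total  # ~0.78, i.e. close to 80%
```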
The same pattern can be seen to repeat by blog:
The marked difference in comment numbers between the blogs obscures underlying similarities. When commenter counts are normalized to proportions, these become plain:
From the data above it is evident that these blogs are not places where readership is “broad” or encompasses a wide range of views on climate. To the contrary, these are highly polarized, partisan blogs serving their cliques. Half of the blogs (Scott Mandia, A Few Things Ill Considered, and Bickmore’s Climate Asylum) hosted comments from a total of just 6 skeptical commenters.
The non-surveyed Skepticalscience.com
What about Skepticalscience’s comment stream? Lewandowsky et al state that John Cook analyzed 1067 comments at his website to identify 222 skeptics, and they use this to buttress claims of broad readership at the survey blogs. One wonders how Cook got these fantastic figures. When commenters for Sept 2010 are analysed, there are 36 skeptical voices out of a total of 286. Cook’s estimate is inflated six times over. In reality, skeptics form 12.58% of commenters for that month, and a mere 0.03 fraction of John Cook’s 1067 unique commenters. These results were verified by independent analysis performed by A.Scott.
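The gap between Cook’s figures and the recount is plain arithmetic; a check using only the numbers quoted above:

```python
# Figures from the LOG13 supplementary information
cook_skeptics, cook_comments = 222, 1067

# Recounted figures for Sept 2010
recount_skeptics, recount_commenters = 36, 286

recount_share = recount_skeptics / recount_commenters  # ~0.126, the 12.58%
inflation = cook_skeptics / recount_skeptics           # ~6.2x: "inflated six times over"
fraction_of_cook = recount_skeptics / cook_comments    # ~0.034, the "0.03 fraction"
```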
Furthermore, close to 90% of commenting viewers are not skeptics. Contrary to Lewandowsky et al, Skepticalscience is not a place where readership is “broad and encompasses a wide range of view on climate”. In fact Skepticalscience exactly matches Deltoid, a virulently anti-skeptic website, in commenter profile.
Importantly however, John Cook never posted the survey at Skepticalscience (see here and here). In the face of this false claim, the authors’ post-hoc exercise of computing skeptic exposure becomes counterfeit.
What would the picture have looked like had Lewandowsky et al actually obtained survey exposure with a skeptical audience? As a comparative exercise, I pulled comment counts from the widely read skeptical blogs Wattsupwiththat, Bishop Hill, Joanne Nova, and Climate Audit for the same period. Traffic figures provided by Anthony Watts indicate close to 3 million visits in August 2010. The results ought to be eye-opening:
A number of things can now be confirmed. The authors of Lewandowsky et al 2013 did not survey skeptical blogs. The websites that carried the survey neither had a broad readership nor represented skeptical readers and commenters. The authors did not survey any readers at the website Skepticalscience, but represent their data and findings as though they did. Lastly, the authors’ calculations in assessing survey exposure, which they base on the same Skepticalscience, are shown to be wrong.
Given the above, conclusions drawn about skeptics by Lewandowsky et al, from sampling a population of readers and commenters who are not skeptics, can be termed invalid. At best, the study’s skeptic-related analysis is meaningless, arising from non-representative sampling. At worst, there is the possibility of false conclusions owing to flawed survey exposure. The above data, combined with the Lewandowsky et al 2013 survey results, in fact show one possible outcome of displaying loaded questions about climate skeptics to a non-skeptical audience. Conclusions about non-skeptical ‘pro-science’ commenters and their psychology are probably more appropriate.
‡ The list of surveyed blogs (from Lewandowsky et al 2013 SI):
Skepticalscience – http://www.skepticalscience.com
Tamino – Open Mind http://tamino.wordpress.com
Climate Asylum – http://bbickmore.wordpress.com
Climate change task force – http://www.trunity.net/uuuno/blogs/
A few things ill considered – http://scienceblogs.com/illconsidered/
Global Warming: Man or Myth? – http://profmandia.wordpress.com/
Deltoid – http://scienceblogs.com/deltoid/
Hot Topic – http://hot-topic.co.nz/
Note that (a) there is no record of Skepticalscience having posted the survey, and (b) the Climate Change Task Force entry is available on the Wayback Machine (e.g., here)
† Batch Google searches (e.g., http://google.siliconglobe.co.uk/) and keyword searches on scraped HTML blog posts were used to search for commenter output. Multiple entries were frequently required for each commenter to be satisfactorily classified. Wherever possible (which was the case in almost all instances), results from August and September 2010 were employed. Comments supportive of the consensus, critical of ‘deniers’ and ‘skeptics’, and/or unequivocally appreciative of the article (e.g., “great post, now I can use this in my arguments with deniers”) were classified as coming from ‘warmists’. Comments approving of the main thrust of a ‘warmist’ blog post, but with no further information available, were classified as ‘ns’ – not skeptic. Commenters questioning the basic premises of a blog post, or being addressed as ‘denier’, ‘denial’, etc., whose stance could be verified by a similar mode of behaviour in other threads, were classified as ‘skeptics’. In most instances they were easily recognized. Those for whom no determination could be made, owing to various factors, were classified as ‘indeterminate’. Commenters explicitly professing acceptance of the consensus but posing relatively minor questions were classified as lukewarmers. Classification required reading at least two different comments for almost every commenter, except in instances where commenter orientation was known from prior experience. Certainly there will be errors to a degree, and subjectivity is involved. It is unavoidable that infrequent (and singleton) commenters, and those with non-unique names (‘tom’, ‘john’), are resistant to classification. Validation of the method was available when blogger A.Scott arrived at similar results working independently on portions of the data.
This article was published at WUWT.
There are two simple, yet serious questions about his paper. Question number one: where is the ethics approval section of the paper?
Now, I might be mistaken. The section could be in the paper. After all, the paper is 57 pages long and the ethics review section could be hidden someplace. On top of that, I am a ‘denier’, so I might not be seeing what’s there.
On to question number two. Why are there what appear to be fabrications and falsifications in the paper?
Again, this has to be clearly understood. People make all sorts of mistakes in research. The kinds of errors considered serious enough to constitute scientific misconduct are hard to pin down. As a shortcut, the US NSF, for instance, determines that any act of fabrication, falsification, or plagiarism qualifies.
The kind of use of comment material that Stephan Lewandowsky, Cook, and others appear to have employed in their paper seems to fall squarely in falsification and fabrication territory. Brandon Shollenberger’s post is published at a prominent outlet, WUWT.
Shollenberger’s evidence doesn’t rely on interpretive grounds to support this conclusion: the excerpted quotes and the full quotes with their context are provided in the open.
Cook usually does not respond to criticism. But this is about a scientific publication in the public domain. Of the questions above, the second is serious. It requires a response.