Stephen Lewandowsky has co-authored (yet another) paper attacking climate skeptics. His colleagues-in-arms this time are long-time climate consensusite Jeff Harvey, Bart Verheggen, and a cohort of ecologists, along with Michael Mann. First author Harvey is well known to climate commenters as a rant-prone, passionate bulldog for the climate cause.
The main supposed finding of the paper is that zoologist Susan Crockford is the source of a large fraction of skeptical blog posts on polar bears. Harvey and colleagues put the figure at 80%. The authors then claim to identify a ‘majority-view’ position in the polar bear literature, which they say is diametrically opposed to the Crockford-based blog position(s).
Polar bear alarmism has a chequered history, and the scientists Ian Stirling, Steven Amstrup and Andrew Derocher have been its prominent proponents. All three have made several statements pushing a specific line: that polar bears are under severe threat, that anthropogenic global warming is the cause, and that the bears’ ability to adapt to changing conditions is limited. Of note here, the paper is co-authored by Ian Stirling and Steven Amstrup. Susan Crockford has been critical of both scientists on her blog and in other venues.
My first thought on seeing the Harvey et al text was whether the so-called ‘majority-view’ papers mainly cited Stirling, Amstrup and Derocher papers in support of their views. Did the authors identify a view present in the literature which traced its antecedents to their own papers?
It turns out the situation is much worse.
Of the 92 papers included in the study, 6 are labeled ‘controversial.’ Of the remaining 86, 60 are authored or co-authored by Stirling, Amstrup, or Derocher. That is, close to 70% (69.8%) of the so-called ‘majority-view’ papers come from just three people, two of whom co-wrote the attack paper itself.
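The arithmetic behind that figure is straightforward; a quick check using the counts reported above (92 papers, 6 controversial, 60 from the three authors):

```python
# Counts as reported in the analysis of the Harvey et al. paper
total_papers = 92
controversial = 6
majority_view = total_papers - controversial  # 86 papers remain

# Papers authored or co-authored by Stirling, Amstrup, or Derocher
by_three_authors = 60

share = by_three_authors / majority_view * 100
print(f"{share:.1f}% of 'majority-view' papers come from three authors")
# → 69.8%
```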
In other words, Stirling and Amstrup did not discern an organically coalesced body of opinion from several polar bear papers by sifting through the literature. They did not even uncover a body of literature supporting a particular stance that cited their own work, as self-referential as that would have been. They ‘found’ their own papers to constitute a ‘majority-view’ in the polar bear literature!
Stirling and Amstrup attack Susan Crockford for not following the ‘majority-view’ and the ‘majority-view’ is what’s expressed in their own papers.
But there’s worse to come. The authors list 6 papers as being ‘controversial’ for eliciting ‘critical comments and discussion in the peer-reviewed literature.’ It turns out Stirling, Amstrup and Derocher themselves wrote comments to 4 out of 6 of these papers. Put another way, Stirling and Amstrup labeled papers they did not like ‘controversial.’
It is no wonder the ‘majority-view’ (green triangles above) displays such a tight cluster of perspectival homogeneity. It is not a majority view but rather a minority one, of just three scientists. The near-absolute lack of variability in opinion along the PC1 axis is likely just due to standard boilerplate alarmist text in the papers of Stirling, Amstrup and Derocher, repeating the mantra of polar bear doom from melting ice, rather than any emergent phenomenon in polar bear literature.
A true majority view (if there can be such a thing) can be discerned only if a representative sampling of the polar bear literature is carefully assessed, with attention to the papers’ scientific content (as opposed to mere headcount), the nature and strength of the supporting evidence presented, and the caveats that careful scientists always include. In such a setting, opposing viewpoints cannot be dismissed as ‘controversial’ merely because they oppose one’s own views.
The paper has several hallmark characteristics of a Lewandowsky piece: the language is dominated by ad hominem attack (e.g., the word denier occurs 31 times) and the text is notable for a number of false statements. The authors purport to analyse ‘the views’ of blogs, but in fact ascribe views to the blogs themselves and then analyse the views they have ascribed. Last but not least, the full data from the paper is not made available. But it is the fatal flaw of non-independent analysis by the paper’s authors that renders its conclusions invalid.
The now-withdrawn Lewandowsky ‘Fury’ paper (link) is possibly one of the most egregious examples of ethically compromised research I have encountered. Delving into the paper, the first thing that crosses one’s mind is: how did the university ethics committee approve this project? This was the study protocol: Lewandowsky’s associates would carry out real-time surveillance on people criticizing his paper, prod and provoke them, record their responses and perform ‘analysis’. How did they say yes?
Lewandowsky’s correspondence with University of Western Australia (UWA) officials has been released (link). Amidst a storm of emails on this previous work, he writes to the secretary of the ethics committee (10 Sep 2012) of his intention to start another project:
This is just to inform you of the fact that I will be writing a follow-up paper to the one that just caused this enormous stir. The follow-up paper will analyze the response to my first paper …
Lewandowsky states there will be no interaction with his subjects: none of the research “will involve experimentation, surveys, questionnaires or a direct approach of participants of any sort“. (emphasis mine)
What would the research be? According to Lewandowsky, his team would “analyz[e] “Google trends” and other indicators of content that are already in the public domain (e.g. blog posts, newspapers, comments on blogs, that type of thing)”. The research would “basically just summarize and provide a timeline of the public’s response.”
The email is a remarkably misleading and limited description of the project he and his associates conducted.
The ethics office response is further divorced from reality. The approval was granted as a “follow-up” study to the ‘Moon’ paper. The ‘Moon Hoax’ paper was itself approved under an application for “Understanding Statistical Trends”. As recounted here, “Understanding Statistical Trends” was a study in which Lewandowsky’s associates showed a graph to shopping mall visitors and asked questions (link pdf). This application was modified to add the ‘Moon hoax’ questions on the day the original paper was accepted for publication. The same application was modified again for the ‘Recursive Fury’ paper. Each modification introduced ethical considerations not present in the previous step. Nevertheless, three unrelated research projects were allowed to be stacked onto a single ethics approval by the university board. In this way, Lewandowsky was able to carry out covert observational activities on members of the general public, as they reacted to his own work, with no human research ethical oversight.
Lewandowsky pitches his study proposal as non-intrusive, observational and retrospective in design: there is “no human participation”, the “content is already in the public domain”, and “irrespective of whether we then summarize that activity”. The implication was that the more elaborate safeguards and vetting usually put in place when working with human subjects were of minimal concern here.
Yet during the period of study, Lewandowsky was in direct conversation with his study subjects (even as he ostensibly observed them). On a posting spree, he wrote 9 articles at shapingtomorrowsworld.org between Sept 3 and 19, 2012. About half of these were written after he approached the ethics office on the 10th. All but two were written after he announced, to the university deputy vice chancellor on the 5th, that he was already collecting data. Among individuals named in the paper as harboring conspiracist ideas, three posted detailed comments with multiple questions responding to these posts on his website. The subjects wrote numerous posts at their own blogs about Lewandowsky’s actions in the same interval. The flow, appearance and final content of comments were influenced by the second author, John Cook: a team headed by Cook operated as moderators at shapingtomorrowsworld.org, deleting parts of, or whole, comments offered by the subjects in the same interval. The elicited comments and posts were then harvested as data for the paper.
The study was thus not an examination of archived material on blogs. As the authors themselves describe, they recorded subject comments and blog posts in “real-time”, as responses to events they themselves had set in motion. Nor can it be considered an observational study, since the authors interacted with the purported subjects during the period of study.
In her reply, the ethics secretary directs Lewandowsky to the UWA Human Subjects research web page (link). The page contains a ‘risk assessment checklist’ to guide researchers to whether a planned study would need ethics approval. It has these questions:
- Active concealment of information from participants and/or planned deception of participants
- Will participants be quoted or be identifiable, either directly or indirectly, in reporting of the research?
- Will data that can identify an individual (or be used to re-identify an individual) be obtained from databanks, databases, tissue banks or other similar data sources?
- Might the research procedures cause participants psychological or emotional distress?
- Does the research involve covert observation?
The answer is a ‘Yes’ to many of these questions. ‘Participants’ declared to be conspiratorial by Lewandowsky are directly identified by name in the paper. The element of covert observation is undeniable.
The possibility of ethical breaches in internet-based research is well understood. Clare Madge (2007) observed that ethically questionable research could come to be carried out “under the radar screens of ethics committees” simply owing to the ease and speed of internet-based research, resulting in ‘shoddy cowboy research’ and a proliferation of ethical misconduct. The study design and conduct of the Lewandowsky et al 2013 ‘Recursive Fury’ paper contain numerous ethical failures. Lewandowsky’s email characterized his work in terms that turned out to be the opposite of what he did. There was no formal application and there was no review, and consequently the prospective, non-observational nature of his project went unscrutinized.
Madge, C. (2007). Developing a geographers’ agenda for online research ethics. Progress in Human Geography, 31(5): 654–674.
Steve McIntyre has a post on the Lewandowsky affair. It is a key one and a summary might be useful.
When questioned how he reported on skeptics in the Moon paper without surveying them, Lewandowsky said he had asked skeptics in 2010 to host the survey. He didn’t say who they were.
This came as a surprise. Searches turned up no messages from Lewandowsky, and several skeptic bloggers reported receiving none. Subsequently, others fished out the survey emails: they had been sent under the name of his assistant, Charles Hanich. The bloggers contacted each other and located the emails rapidly.
This was summarized on Jo Nova’s blog and other venues on a running basis.
A day before this, a post appeared on the Shaping Tomorrow’s World blog. In it, Lewandowsky posted names of the sceptical bloggers whom he sent survey requests to.
Steve McIntyre shows evidence that Lewandowsky published the post after skeptics announced receipt of the survey emails, but backdated it. This made it appear as though his post had contributed to the bloggers finding the survey emails.
The Lewandowsky group rely on this chronology. The Fury paper states the names of the bloggers “…became publicly available on 10th September 2012, on a blog post by the first author of LOG12”. ‘LOG12’s first author is Lewandowsky.
Except that, according to McIntyre’s analysis, the post was actually published on the 11th of September, not the 10th, and was made to appear otherwise.
The ‘Moon’ and ‘Fury’ saga is no longer in the realm of scientific debate. There are three occasions on which Lewandowsky and his group have been directly confronted:
- false representation involving quoted material in the suspended ‘Fury’ paper
- non-posting of survey at skepticalscience.com, and yet making calculations on its basis, in the ‘Moon’ paper
- apparent fabrication of blog dates and use of alleged backdated material to make claims in the ‘Fury’ paper
¹as of this writing.
The Cook co-authored presentation includes a slide about a Centers for Disease Control (CDC) flyer on flu vaccines. It claims that attempting to correct a ‘myth’ can familiarize readers to the myth.
To illustrate, Cook and co-authors show the flyer, which lists ‘myths’ about influenza vaccines and provides correct information beneath each item. Item #3 is “The side effects are worse than the flu”, which is countered by “The worst side effect you’re likely to get with injectable vaccine is a sore arm”.
Cook and co-authors believe this approach is wrong. Instead, they insist that only correct information (‘facts’) be provided. The authors advocate saying: “–The vaccine is safe! The worst side effect would be a sore arm.”
This is outright false information. There are several adverse effects associated with an injectable flu vaccine, the least harmful of which is a sore arm. Far worse adverse events are possible, ranging from fever to severe allergic reactions. Fortunately, these are rare. What vaccinees need is to be informed in an appropriate manner about the probabilities of these events.
Cook strips key phrases from the CDC’s careful language and adds new words and an exclamation point. An optimistic statement about probabilities has been mutated into a false categorical declaration about the absolute risk of a vaccine. What one hears now sounds like a burger sales pitch from a marketing team. It is the Susan Joy Hassol brand of ‘science communication’.
The live-attenuated vaccine administered as a nasal spray can result in a mild flu-like illness. In a given season, it is quite likely that individuals note their mild symptoms following the vaccine and contrast them with unvaccinated people who show no visible illness. The origin of the misconception likely lies in this specific context. The CDC text addresses these points.
Cook’s approach, instead, is to label the whole thing as just a ‘myth’.
Yeah. That should clear up all confusion.
Postscript: Cook and Lewandowsky’s conclusion that repeating a so-called myth will reinforce it comes from their colleague Norbert Schwarz’s presumptuous “Skurnik, I., Yoon, C., & Schwarz, N. (2007). Education about flu can reduce intentions to get a vaccination”. We learn, from a Schwarz review article in the Journal of Experimental Social Psychology, that Skurnik et al showed subjects two leaflets, one being the CDC flyer and the other a simple ‘facts’ sheet. Those who saw the CDC flyer misremembered more myths as facts on a true/false questionnaire after a time delay. How such a study design can reflect the intentions or the true state of knowledge of potential vaccine recipients, rather than the test-taking strategies of experimental subjects, is beyond me.
Incidentally, ‘Skurnik, Yoon and Schwarz (2007)’ is…unpublished.
Consider what Cook does on his website. He takes a purported ‘myth’, which is usually a caricatural simplification of an original question, and starts off confidently pretending that there is a clear-cut refutation. The refutation consists of an answer that is often over-simplified to the point of falseness. When all the messy questions that arise from reality are ‘myths’, all answers are simple.
Cook should keep his shenanigans to climate and stop spreading false information on vaccines and matters of public health and safety.