Edited and Annotated by John Costella
The Lavoisier Group
March 2010

About the Author
John Costella was born in East Melbourne in 1966. After being …
Despite Karl completely agreeing with his butchering of the language, Schneider is concerned that Karl’s term is still not alarmist enough. His response reminds one of
Sir Humphrey in Yes, Minister:
Great Tom, I think we are converging to much clearer meanings across various cultures here. Please get the “inconclusive” out! By the way, “possible” still has some logical issues as it is true for very large or very small probabilities in principle, but if you define it clearly it is probably OK—but “quite possible” conveys medium confidence better—but then why not use “medium confidence”, as the 3 rounds of review over the guidance paper concluded after going through exactly the kinds of discussions we’re having now?
Indeed, if they continued this farce for long enough, they would eventually conclude that they may as well say that it is “overwhelmingly likely”! Remember, we are here talking about a scenario that—even according to their own calculations—was just as likely to be wrong as it was right!
September 11, 2000: email 0968705882 Filippo Giorgi, Senior Scientist and Head of the Physics of Weather and Climate Section of The Abdus Salam International Centre for Theoretical Physics in Trieste, Italy, writes to the other Lead Authors of Chapter 10 of the latest IPCC Report. In his
first paragraph he makes a comment that I will return to below:
We said that one thing to look at was the agreement with the old data and thus I noticed that relaxing the criteria determining what “agreement” means, would yield a greater agreement.
He then details his serious concerns about how the IPCC Report is being drafted:
First let me say that in general, as my own opinion, I feel rather uncomfortable about using not only unpublished but also un-reviewed material as the backbone of our conclusions (or any conclusions). I realize that Chapter 9 of the Report is including new stuff, and thus we can and need to do that too, but the fact is that in doing so the rules of the IPCC have been softened to the point that in this way the IPCC is not any more an assessment of published science (which is its proclaimed goal), but the production of results. The softened condition that the models themselves have to be published does not even apply, because the Japanese model, for example, is very different from the published one which gave results not even close to the actual … version …. Essentially, I feel that at this point there are very little rules and almost anything goes. I think this will set a dangerous precedent which might undermine the IPCC’s credibility, and I am a bit uncomfortable that now nearly everybody seems to think that it is just OK to do this. Anyway, this is only my opinion, for what it is worth.
Further on in the email, he describes the criterion for determining that models “agree”:
1) Do we soften our requirement, i.e. from “all the models except one need to agree with each other” to “all the models except two need to agree with each other” agreement? I do not feel strongly about it but am more in favor of not softening the criterion. We are looking for confidence and model agreement and should have stringent requirements on it.
In other words (filling in the gaps from emails we do not have), what happened is the following: the scientists had previously decided that the models would be deemed to "agree" if either all of them agreed with each other, or all but one did. But in preparing their chapter for the Report, they found that two of the models disagreed with the others. It was therefore suggested that they "move the goalposts", after the event, redefining "agreement" so that two of the models would be allowed to disagree!
In fact, this entire “criterion for agreement” is absolute nonsense in the first place, flying in the face of the most elementary principles of statistics, as I will discuss shortly.
But even ignoring that, the idea that they can avoid the “inconvenient truth” of their results by moving the goalposts after the fact is, in and of itself, serious scientific fraud.
To his great credit, Filippo is arguing against this form of subterfuge.
2) Do we include the data that disagree in the analysis? I say yes, not having time for more detailed analysis as to why they should not be included. In Chapter 9 of the Report they are presented as bracketing the answers, not as being wrong. This is the problem of not having published research on this: perhaps a paper would have excluded them on scientific grounds, but can we, at this point? I am not sure we can have solid enough foundations to legitimate it. Besides, I have done the analysis without them as well, and things did not change almost at all.
To any scientist with even a rudimentary knowledge of statistics, this paragraph shows that the entire IPCC had absolutely no idea what they were doing. Data that disagree with the other data ("outliers" in the jargon of mathematics, although I will not use that term here) are of critical importance: understanding them is key to understanding what your data are telling you; they provide the ultimate "reality check" that you really know what you are doing. They are not "wrong", as these "scientists" suggest; they should not simply be omitted, nor merely presented as "bracketing" the data that do agree.
Again, Giorgi’s comments are a credit to his wisdom and integrity: he urges— correctly—that if there is no valid reason to exclude the data, then it must be presented as it stands. He is also arguing against using unpublished (and thus not peer-reviewed) results, particularly from models such as the Japanese model that have not been published, and disagree with the results of models that have been published.
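Giorgi's remark that "things did not change almost at all" when the disagreeing models were dropped can be illustrated in miniature. The sketch below uses invented numbers (nothing here comes from the emails or from any real model run): silently discarding the two disagreeing results barely moves the ensemble mean, but it dramatically shrinks the spread, which is exactly what an honest statement of uncertainty rests on.

```python
import statistics

# Hypothetical warming projections from nine models (degrees Celsius).
# These values are invented for illustration only.
projections = [1.1, 1.2, 1.2, 1.3, 1.3, 1.4, 1.2, 2.6, 0.2]
agreeing = projections[:7]  # silently drop the two "disagreeing" models

mean_all = statistics.mean(projections)
mean_agree = statistics.mean(agreeing)
spread_all = statistics.stdev(projections)
spread_agree = statistics.stdev(agreeing)

# The mean barely changes, but the spread collapses: the "cleaned"
# ensemble looks far more certain than the full one actually is.
print(f"mean:   {mean_all:.2f} (all)  vs {mean_agree:.2f} (screened)")
print(f"spread: {spread_all:.2f} (all)  vs {spread_agree:.2f} (screened)")
```

The point is Giorgi's own: if the conclusion is unchanged, the outliers cost nothing to keep; what their removal does buy is an artificially narrow uncertainty band.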
September 12, 2000: email 0968774000 Following on from the previous email, Filippo Giorgi writes to the various Lead Authors, having obtained at least partial agreement with his arguments. He reiterates
I myself think that material for a document as important as the IPCC's Third Assessment Report cannot be drawn from last-minute barely quality-checked and un-peer-reviewed material (people have barely looked at the Max Planck Institute run that was completed last Friday!).
September 14, 2000: email 0968941827 Recall the discussion above about the criterion used to determine if a set of models “agreed” with each other.
Hans von Storch argues against moving the goalposts:
I have already indicated that I fav[or] the “all models but one have to agree” version. Obviously, this choice of criterion is arbitrary, but it was made before we did the analysis. By changing the criterion after we have seen the data, we may be targeted by critics for biased rules. Using material which is unpublished and unreviewed is already a bit shaky (Hans Oerlemans is unwilling to participate in the IPCC process because of a similar incident in the 1995 report!).
Peter Whetton argues that the criterion is now too stringent, because it gives them less chance of getting "agreement" purely by luck! He points out that

the criterion was previously only used for five models, for which … agreement … could be expected 37% of the time just by chance …. With nine models the equivalent figure for "all models but one have to agree" is only 3.5%, and it is still much lower [than 37%] for "all models but two have to agree" (18%) … (assuming that my somewhat rusty probability calculations are correct). It really depends on what we had understood the purpose of the criterion to be. I am not certain how much this was discussed.
As noted above, standard scientific practice is to ensure that the chance of getting agreement purely by luck is less than some threshold, often 5 per cent. To argue that the criterion is too strong because the chance of such a "false (lucky) positive" is only 3.5 per cent, and that the previous situation of allowing a 37 per cent chance of a false positive was far preferable, is simply astounding: it shows a very poor understanding of the fundamental principles of statistics.
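Whetton's "rusty" figures are easy to reproduce. The sketch below assumes the simplest chance model, which the emails never spell out: each model independently "flips a coin" on the sign of the change, and the ensemble "agrees" when all but at most k models share the same sign. Under that assumption the figures come out at 37.5, 3.9 and 18.0 per cent, close to the 37, 3.5 and 18 he quotes.

```python
from math import comb

def chance_agreement(n_models: int, max_dissent: int) -> float:
    """Probability that all but at most `max_dissent` of `n_models` agree
    on the sign of a change, if each model is a fair coin flip.
    (A toy chance model; valid for max_dissent < n_models / 2.)"""
    # Count outcomes where at most k models dissent from the majority sign;
    # the factor of 2 covers both possible majority signs.
    favourable = 2 * sum(comb(n_models, k) for k in range(max_dissent + 1))
    return favourable / 2 ** n_models

print(f"{chance_agreement(5, 1):.1%}")  # 5 models, all but one agree: 37.5%
print(f"{chance_agreement(9, 1):.1%}")  # 9 models, all but one agree:  3.9%
print(f"{chance_agreement(9, 2):.1%}")  # 9 models, all but two agree: 18.0%
```

Note that even on this toy model, the criticised nine-model criterion (3.9 per cent false positives) is the only one of the three that would meet the conventional 5 per cent standard.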
But more astonishing is Whetton's lack of confidence in performing an elementary probability calculation that 16-year-old high school students carry out routinely! It would be equivalent to Tiger Woods expressing a lack of confidence in his ability to decide which wood he should use for a particular hole …

September 22, 2000: email 0969618170 Tom Crowley of the Department of Oceanography at Texas A&M University writes to Malcolm Hughes and Keith Briffa, about the huge problems involved in trying to figure out whether the various "temperature proxies" are measuring temperatures,
carbon dioxide levels, or some other complicated combination:
As I discuss in my … paper the “anomalous” late 19th century warming also occurs in a … tree ring record from central Colorado, the Urals record of Keith Briffa, and the east China … temperature record of Zhu.
Alpine glaciers also started to retreat in many regions around 1850, with one-third to one-half of their full retreat occurring before the warming that commenced about 1920.
… So, are you sure that some carbon dioxide effect is responsible for this?
May we not actually be seeing a warming?
Malcolm Hughes’s response exemplifies the utter confusion of these researchers:
I tried to imply in my e-mail, but will now say it directly, that although a direct carbon dioxide effect is still the best candidate to explain this effect, it is far from proven. In any case, the relevant point is that there is no meaningful correlation with local temperature.
Why should these topics be so dangerous that they can only be implied, never stated explicitly?
In the mathematical jargon that I have omitted from this email, and many others, these "scientists" explain that the "proxies" they are using (tree rings, and so on) sometimes seem to measure temperature, and sometimes do not. The extremely blunt, simplistic and naive mathematical test that they use to determine this, so simple that it is used every day by high school students, is called "correlation".
What they do is “cherry pick” those proxies that seem to give the “right” answers, and ignore those that don’t. That’s not just bad science: it’s completely wrong.
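The statistical trap in screening proxies by correlation can be demonstrated with pure noise. The sketch below is a toy illustration (the series are random numbers, not real proxy or temperature data): generate a thousand noise "proxies", keep only those that correlate with a target "temperature" series, and a few dozen will pass the screen even though, by construction, none of them carries any temperature signal at all.

```python
import random
import statistics

def pearson(x, y):
    """Sample Pearson correlation coefficient between two series."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(42)
years = 50
temperature = [random.gauss(0, 1) for _ in range(years)]  # target series
# 1,000 candidate "proxies" of pure, signal-free noise:
proxies = [[random.gauss(0, 1) for _ in range(years)] for _ in range(1000)]

# "Screen" the proxies: keep only those that correlate with temperature.
# r > 0.28 is roughly the 5% significance level for 50 data points.
selected = [p for p in proxies if pearson(p, temperature) > 0.28]
print(len(selected))  # a few dozen pass, despite containing no signal
```

Any reconstruction built from the `selected` series will then appear to "track" the target over the screening period purely by construction, which is precisely why selecting proxies by their correlation with the answer is circular.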
Hughes’s next comment exemplifies this “cherry picking”:
I am confident that, before 1850, they do contain a record of temperatures changing over decades. I am equally confident that, after that date, they are recording something else.
And, at the end of the day, that’s what this “cherry picking” is based on: the gut feeling of these scientists.
February 27, 2001: email 0983286849 Phil Jones is upset that Julia Uppenbrink, the Editor at Science, did not send a piece to
them to review, which would have allowed them to block it:
Obviously this isn’t great as none of us got to review it. Odd that she didn’t send it to one of us here as she knew we were writing the article she asked us to!
It is noteworthy that these scientists have assumed that every single article published in Science relating in any way to climate science would automatically be sent to them for approval or otherwise.
March 2, 2001: email 0983566497 Chick Keller, of the Institute of Geophysics and Planetary Physics at the University of California at San Diego, United States, writes to Mike Mann, Ray Bradley, Phil Jones, Keith Briffa, Tom Crowley, Jonathan Overpeck, Tom Wigley, and Mike MacCracken, pointing out problems in the historical temperature estimates obtained from individual “proxy” methods:
Anyone looking at the records gets the impression that the temperature variation for many individual records or sites over the past 1000 years or so is often larger than 1° Celsius. … And they see this as evidence that the 0.8° Celsius or so temperature rise in the 20th century is not all that special.
He then makes note of a trick that they have used to mask this effect: