Some sea temp stuff
arry Lu (20:11:48) :

George E. Smith (18:08:33) :
“Bob you are so charitable. LWIR warms the top few cm. I figure that atmospheric (tropospheric anyway) LWIR can hardly be significant below about 3-4 microns…; so let's be generous and say it might warm the top 10 microns. How much of that energy remains following the prompt evaporation from that hot skin?”

Have you not forgotten conduction? It operates in all directions!

So we have the top few cm heated by SW and LW, and a few tens of metres down heated by UV.

So the surface cm absorbs a percentage of the SW (as does each cm of the deeper water, except that the percentage is of a progressively smaller maximum) plus all the LW re-radiated from GHGs.

The surface also receives LW from the layer under the surface and radiates LW down to this lower layer. Because the surface is hotter, this will average out to an energy transfer downwards.

So the hotter the surface, the less of the lower water's energy will be radiated (lost) into the atmosphere. Less loss, with the same SW TSI heating the lower layers, will mean a higher temperature.

Of course the surface is losing heat via conduction in all directions, radiation in all directions, and forced air convection upwards (and sideways!).

However, the surface layer heating must affect the lower layer cooling, in my books.

According to your diagram of the energy budget:

Only 169 W/m^2 of SW radiation gets absorbed (198 W/m^2 hits the ground).
The back radiation from GHGs is 321 W/m^2 absorbed by the ground.

If 321 W/m^2 is absorbed in the top layer and 169 W/m^2 is absorbed over tens of metres, then the top layer will be much warmer than the lower layers.

So is it not true that this top layer must control the temperature of the lower layers?
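The layered-absorption argument above can be sketched numerically. This is a toy illustration, not a real ocean model: it assumes the 169 W/m^2 of SW decays exponentially (Beer-Lambert) with an invented e-folding depth of 10 m, while all 321 W/m^2 of LW back-radiation is deposited in the top layer.

```python
import math

# Toy sketch of the layered-absorption argument (not a real ocean model).
# Assumptions: SW absorbed with a made-up 10 m e-folding depth (Beer-Lambert);
# all LW back-radiation absorbed in the top layer.

SW_TOTAL = 169.0   # W/m^2, SW absorbed by the ground (from the budget above)
LW_TOTAL = 321.0   # W/m^2, back-radiation absorbed at the surface
E_FOLD = 10.0      # m, assumed SW e-folding depth

def sw_absorbed(z_top, z_bot):
    """SW power (W/m^2) absorbed between depths z_top and z_bot, in metres."""
    return SW_TOTAL * (math.exp(-z_top / E_FOLD) - math.exp(-z_bot / E_FOLD))

top_cm = sw_absorbed(0.0, 0.01) + LW_TOTAL   # top 1 cm also takes all the LW
deeper = sw_absorbed(0.01, 50.0)             # SW deposited from 1 cm down to 50 m

print(f"top 1 cm:    {top_cm:.1f} W/m^2")
print(f"1 cm - 50 m: {deeper:.1f} W/m^2")
```

Whatever e-folding depth is assumed, the top layer ends up taking far more power per unit volume, which is the point being argued.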


oh dear (updated 2010/06/16)

Where it started:
wattsupwiththat (16:51:06) :
You know, this REALLY chaps my hide, especially since I’m doing a lot of green things myself, and have made major green contributions in the past.

A few examples:

A drop of one or two degrees would devastate vast areas of food production in Canada, Northern Europe and Russia. An increase will do (has done) wonders for production. I can’t speak with authority on warmer climes, but would hazard a guess that a shift of one or two degrees in a warm climate will have less of an impact than a shift

It’s clear that you just don’t get it, thefordprefect: a somewhat warmer, more balmy and pleasant climate is more desirable than what we have now; while colder temperatures will certainly kill people.

If AGW really is a crisis, there is close to nothing you can do about it, and nothing at all that wouldn’t involve massive reduction in energy use and quality of life decline to go along with it. I will feel proud when I look at my children ... that I fought to keep them free and living in a world where their quality of life is at least as good as mine was if not better because I stopped self righteous zealots like yourself from reversing the industrial revolution. Good day sir.

I personally believe that the burning of fossil fuels such as coal are one of the only real things that mankind has done to benefit the earth as a whole. The total amount of available carbon and biomass on the surface of the earth has shrunk dramatically sin

You’ve lumped everyone into your world view and hurled a faux pas of major proportions.

Possibly true! I should have targeted better.

So “thefordprefect” whoever the hell you are (just another anonymous coward making unsubstantiated claims)

No! I sent you a private email with my full name warning about allowing comments that could be considered defamatory by those attacked (the comments could affect their ability to earn, and it would be difficult for you to prove that they truly were trying to defraud). By remaining anonymous as thefordprefect your contributors are welcome to defame me as much as they like! I am unknown!

This is the address section of the email:
To: info@surfacestations.org
Subject: FAO Anthony watts wattsupwiththat - be careful
From: M. xxxxxxx
Date: 01 March 2009 14:40:12

Also, most of the statements I have made have been backed up with data. My accusations are in response to insults hurled at me and others.

let me make this clear: apologize for your broad generalization as “we don’t care about the planet” or get the hell off my blog.

This was a general impression I got from some of the responses. It was aimed at them. I had already read of your low energy home.

I’m not interested in debating the issue, I’m not interested in your excuses. I’m not interested if you are offended. Apologize contritely or leave, those are the terms.

Do what you will - my apology would only be for not targeting my comment better. There has been no possibility of any enlightening debate from the responses I got (although this last comment seems to have drawn some sensible responses).

I truly hope I (and others) are wrong about AGW. I truly fear that I am not.

Anthony Watts - ItWorks (*****@itworks.com)
Sent: Sat 3/14/09 6:20 AM
To: *******@hotmail.com

All you have to do is post an apology and move on. It's real easy. I'll even compose a sample for you.

"I'm sorry that I made a generalization that assumes all posters here don't care about our environment. I will be more careful with my words in the future."

Or something similar. If you don't wish to you are certainly not obligated, but you won't be posting anymore without such an apology.

Your previous multi-level reply won't fly. I have no record of any previous email from you, this email address is what you listed in your comment form.

Anthony Watts

bill (12:00:53) : Your comment is awaiting moderation

bill (10:40:11) :
REPLY: Get your own blog then, but please don’t tell me how to run mine. I’ll post as many threads as I wish. And where’s your data citation link? Shifted and spliced data? Prove that’s valid. And if you really want to be taken seriously, drop the “galactic hero” meme and come clean with your full name. No need to hide. -Anthony

Anthony, you castigated me for posting the same message on the sticky smoking-gun threads. My point was that if you start similar threads with the same theme and no different data, then I would ask to post the same message on both.

If I recall correctly, someone (a “warmist”) on McIntyre's blog had their real name exposed, leading to an event (I missed what it was) that forced the whole thread to be deleted. If I post garbage or wisdom on a topic, it should not be made more or less acceptable because of my real status.
Looking at some of the comments made by CRU and other scientists about “nasty” emails they have received, I prefer the safe option. My real name would enable Google to provide my home address (= business), phone, and private email.

As for references
Angmagssalik 65.6 N 37.6 W 431043600000 rural area 1895 – 2009
“Raw” data

The ice core data is from the reference in the header

I said it was a rough tack of the instrumental record onto the plot. Is there another way of doing this with so little data available?
To only show data to the 1850s and say there is no massive rise in temp in the 21st C is disingenuous.

Row per year to month per row

Here is some rough Excel VBA code. It works.
NOTE that the workbook must be saved as a .XLSM macro-enabled workbook, and macros will have to be enabled.

click the Developer tab
type a new macro name
e.g. YearPerRowToClmn

Assign it to a keyboard shortcut if you like

click [Create]
then between:

Sub YearPerRowToClmn()

End Sub

paste this:

' Insert 11 rows, copy the 11 monthly values on the year's row,
' and paste them transposed into the column below
ActiveCell.Offset(1, 0).Range("A1:A11").Select
Application.CutCopyMode = False
Selection.EntireRow.Insert , CopyOrigin:=xlFormatFromLeftOrAbove
ActiveCell.Offset(-1, 2).Range("A1:K1").Select
Selection.Copy
ActiveCell.Offset(1, -1).Range("A1").Select
Selection.PasteSpecial Paste:=xlPasteAll, Operation:=xlNone, SkipBlanks:= _
False, Transpose:=True
ActiveCell.Offset(11, -1).Range("A1").Select

repeattranspose:
' Stop when an empty cell (no more years) is reached
If Len(ActiveCell.Text) < 2 Then GoTo stopp
ActiveCell.Offset(1, 0).Range("A1:A11").Select
Application.CutCopyMode = False
Selection.EntireRow.Insert , CopyOrigin:=xlFormatFromLeftOrAbove
ActiveCell.Offset(-1, 2).Range("A1:K1").Select
Selection.Copy
ActiveCell.Offset(1, -1).Range("A1").Select
Selection.PasteSpecial Paste:=xlPasteAll, Operation:=xlNone, SkipBlanks:= _
False, Transpose:=True
ActiveCell.Offset(11, -1).Range("A1").Select
GoTo repeattranspose

stopp:


to use:
get the data in text form with 1 year of 12 months (+ other stuff) per row
Select the data - [ctrl]+a selects the lot
Copy the data - [ctrl]+c
open a blank sheet in the workbook containing the macro
paste the data at cell A1
You now have a single column of data, one year per row. If your Excel is set up differently it may convert the data to columns automatically; if not:
select the first to last year:
click the first - scroll to the last and click the last year whilst holding shift
select the [Data] tab
click Text to Columns
click Delimited if you KNOW that there is always a certain character (space, comma etc.) between monthly data
or click Fixed Width
click Next (select the delimiter character if necessary)
check the columns are correctly selected - move, delete or add. If a station number is in the data, it usually has the date attached without a space. If so, add a separator to separate the date from the station.
If the station number is in the first column, click Next, set the station number column to Text, and click Finish
else click Finish
You should now have the data separated into columns.

Click the cell containing the first date (or the first cell to the left of January's temperature)
Save the workbook, as the next step is not undo-able.
Run the macro above (use the keyboard shortcut, or go to the Developer tab and double-click the macro name).
The macro should stop on the last line of data (it looks for an empty cell). However, if it does not, press [ctrl]+[break] a number of times, then select End or Debug from the options according to taste.
No guarantee is given with this software!!!!!
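If you would rather do the reshape outside Excel, here is a hypothetical Python sketch of the same job: year-per-row input (year followed by 12 monthly values, whitespace-separated) turned into one month per row, with the decimal date format year+(month-1)/12 used later in this post. Adjust the split for comma-separated files.

```python
# Hypothetical alternative to the Excel macro above: reshape year-per-row
# data (year then 12 monthly values) into one (decimal_date, value) row per
# month. Assumes whitespace-separated text; adjust the split for CSV.

def year_rows_to_month_rows(lines):
    out = []
    for line in lines:
        parts = line.split()
        if len(parts) < 13:
            continue  # skip headers and short rows
        year = int(parts[0])
        for month, value in enumerate(parts[1:13], start=1):
            out.append((year + (month - 1) / 12.0, float(value)))
    return out

sample = ["1895 -1.2 0.3 2.1 5.6 9.8 12.4 14.1 13.9 11.0 7.2 3.1 0.4"]
for date, temp in year_rows_to_month_rows(sample)[:2]:
    print(f"{date:.4f} {temp}")
```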

If you ever end up with a correctly formatted column of monthly data, you will need to remove all data error indicators.
Mark the data column
on the [Data] tab select Filter
on the first line of the column click the down arrow
deselect all (click the ticked Select All box)
look through the offered numbers for error indicators: "-", "-9999.9" etc.
click the boxes associated with the error indicators
only the data in error is now shown
mark it all (be careful of the first box as this is partially obscured by the arrow) and press [Delete]
Turn off the filter by clicking Filter in the ribbon again
The data is now clean
If the data shows temp*10:
in the cell adjacent (right) to the first temperature, type
= [click the temperature to the left to enter the cell]/10
mark the whole column, from this cell to the last row containing a temperature
from the [Home] tab click Fill, then select Fill Down
This column now contains correctly scaled temperatures, but the cell contents are actually formulae.
With the column marked, copy it with [ctrl]+c
right-click the first cell of the column and select Paste Special
select Values Only, then OK it
The column now contains actual values and can therefore be copied to another sheet with dates in decimal format, i.e. year+(month-1)/12. Note that Excel does not like true dates before Jan 1st 1900.
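The filter-and-scale steps above can be done in one pass in Python. A small sketch; the error markers here are common examples and vary by dataset, so check yours:

```python
# Sketch of the cleanup steps above: drop error indicators (the -9999.9
# style markers are assumed examples - datasets differ) and divide
# temp*10 values by 10 to get degrees.

ERROR_VALUES = {-9999.9, -999.9, -99.9}  # assumed markers for this sketch

def clean_and_scale(raw):
    """Remove error indicators, then scale tenths-of-a-degree to degrees."""
    return [v / 10.0 for v in raw if v not in ERROR_VALUES]

print(clean_and_scale([152.0, -9999.9, 148.0, -99.9, 161.0]))
```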



From wuwt
Willis, looking at the unadjusted plots leads me to suspect that there are 2 major changes in measurement methods/location.
These occur in January 1941 and June 1994 - the 1941 one is well known (po to airport move). I can find no connection for the 1994 shift.
These plots show the 2 periods each giving a shift of 0.8C.
The red line shows the effect of a suggested correction.
This plot compares the GHCN corrected curve (green) to that suggested by me (red).
The difference between the 2 is approx 1C, compared to the 2.5 you quote as the “cheat”.


Defunct code found in stolen data

Robert Greiner you state (on wattsupwiththat):

Line 8
This is where the magic happens. Remember that array we have of valid temperature readings? And, remember that random array of numbers we have from line two? Well, in line 4, those two arrays are interpolated together.

The interpol() function will take each element in both arrays and “guess” at the points in between them to create a smoothing effect on the data. This technique is often used when dealing with natural data points, just not quite in this manner.

The main thing to realize here, is, that the interpol() function will cause the valid temperature readings (yrloc) to skew towards the valadj values.

Let's look at a bit more of that code:
; Apply a VERY ARTIFICAL correction for decline!!
2.6,2.6,2.6]*0.75 ; fudge factor
if n_elements(yrloc) ne n_elements(valadj) then message,'Oooops!'

Does not this line give a yearly adjustment value interpolated from the 20 year points?
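That interpolation step can be sketched with NumPy. The anchor years and values below are illustrative stand-ins, not the actual yrloc/valadj arrays from the CRU file:

```python
# What the IDL call interpol(valadj, yrloc, timey) does, sketched with
# NumPy: linear interpolation of adjustment values defined at 20-year
# anchor points onto a yearly axis. Anchors and values here are made up.
import numpy as np

yrloc = np.array([1900, 1920, 1940, 1960, 1980])      # assumed anchor years
valadj = np.array([0.0, 0.0, -0.1, 0.3, 2.6]) * 0.75  # illustrative "fudge factor"
timey = np.arange(1900, 1981)                         # yearly time axis

yearlyadj = np.interp(timey, yrloc, valadj)
print(yearlyadj[0], yearlyadj[-1])
```

Each year between two anchors gets a value on the straight line joining them, which is exactly the "yearly adjustment value interpolated from the 20 year points" described above.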

filter_cru,5.,/nan,tsin=yyy,tslow=tslow
oplot,timey,tslow,thick=5,color=21

Does not this line plot data derived from yyy?


The smoking gun line!!!!
Does not this line plot data derived from yyy+yearlyadj - the FUDGED FIGURE?

This is further backed up by the end of file:
;legend,['Northern Hemisphere April-September instrumental temperature',$
; 'Northern Hemisphere MXD',$
; 'Northern Hemisphere MXD corrected for decline'],$
; colors=[22,21,20],thick=[3,3,3],margin=0.6,spacing=1.5
legend,['Northern Hemisphere April-September instrumental temperature',$
'Northern Hemisphere MXD'],$

To me this looks as if 'Northern Hemisphere MXD corrected for decline' would have been printed in colour 20 - just the same as the smoking gun line. HOWEVER you will note that this section is commented out also.

This code was written in 1998. If it had been implemented in any document then there would have been no leaked emails about hiding the decline!

So in my view this is code left in after a quick look-see.

Remember, engineers and scientists are human; they play when bored and do not always tidy up.
have a look at:

From wuwt and woodfortrees
Here’s Gavin of RC on the subject (which was quoted by “Norman” in comments on your previous posting):

“It was an artificial correction to check some calibration statistics to see whether they would vary if the divergence was an artifact of some extra anthropogenic impact. It has never been used in a published paper (though something similar was explained in detail in this draft paper by Osborn). It has nothing to do with any reconstruction used in the IPCC reports.”

And indeed, in the same set of comments, “Morgan” pointed out that the Osborn et al. paper explicitly describes this step:

“To overcome these problems, the decline is artificially removed from the calibrated tree-ring density series, for the purpose of making a final calibration. The removal is only temporary, because the final calibration is then applied to the unadjusted data set (i.e., without the decline artificially removed). Though this is rather an ad hoc approach, it does allow us to test the sensitivity of the calibration to time scale, and it also yields a reconstruction whose mean level is much less sensitive to the choice of calibration period.”

I’m not sure which one of these your particular code snippet is doing, but either seems a perfectly reasonable explanation to me – and both require the code to be added and then removed again. The lazy programmer’s way of doing this is by commenting and uncommenting.

If some hacker accessed some code illegally which contains commented-out sections:
1. You do not know the status of the code - development, an issue, or final issued. How can you criticise it?
2. The presence of commented-out code or a separate programme (which this thread is about) does not prove intent to commit fraud. As someone else commented, the presence of unwritten code written by the invisible pink unicorn that says invisibly "this code creates a hockey stick" will not stand up in a court of law. To use the argument here that "it could have been used, so it must show intent to commit fraud" is disingenuous to say the least.

WUWT entry

Line 4:
; Reads Harry’s regional timeseries and outputs the 1600-1992 portion
Line 10:

2.6,2.6,2.6]*0.75 ; fudge factor


Notice that phrase "fudge factor" - it doesn't sound like hiding to me!

Lines 53-70

I feel that briffa_sep98_e.pro is the encoding of a lie.

did you notice this:

Next file calibrate_nhrecon
; Specify period over which to compute the regressions (stop in 1960 to avoid
; the decline that affects tree-ring density records)
next file recon_overpeck
; Specify period over which to compute the regressions (stop in 1940 to avoid
; the decline
(I think they mean 1960 !! to agree with the code that follows)
Next File recon_esper.pro
All the same comment added in the header

Hiding?? I do not think so

This seems to be the later version of your files:
ml=where(densadj eq -99.999,nmiss)

Note: no yearlyadj, no valadj.
So which programme was used to publish??



CO2 the stuff of life

Let's look at 2 gases:
a poison – hydrogen sulphide, H2S
and a nutrient beneficial to all life – CO2

10 ppm
Beginning of Eye Irritation
50-100 ppm
Slight conjunctivitis and respiratory tract irritation after one hour
100 ppm
Coughing, eye irritation, loss of sense of smell after 2-15 minutes. Altered respiration, pain in the eyes, and drowsiness after 15-30 minutes, followed by throat irritation after one hour. Several hours' exposure results in a gradual increase in severity of symptoms, and death may occur within the next 48 hours.
200-300 ppm
Marked conjunctivitis and respiratory tract irritation after one hour exposure.
500-700 ppm
Loss of consciousness and possibly death in 30 minutes to one hour of exposure.
700-1000 ppm
Rapid unconsciousness, cessation of respiration, and death
1000-2000 ppm
Unconsciousness at once, with early cessation of respiration and death in a few minutes. Death may occur if individual is removed to fresh air at once.

The most dangerous aspect of hydrogen sulfide results from olfactory accommodation and/or olfactory paralysis. This means that the individual can accommodate to the odor and is not able to detect the presence of the chemical after a short period of time. Olfactory paralysis occurs in workers who are exposed to 150 ppm or greater. This occurs rapidly, leaving the worker defenseless. Unconsciousness and death have been recorded following prolonged exposure at 50 ppm.

There were 80 fatalities from hydrogen sulfide in 57 incidents, with 19 fatalities and 36 injuries among coworkers attempting to rescue fallen workers.

Carbon dioxide is an asphyxiant. It initially stimulates respiration and then causes respiratory depression.
High concentrations result in narcosis. Symptoms in humans are as follows:
1% (10,000 ppm)
Breathing rate increases slightly.
2%
Breathing rate increases to 50% above normal level. Prolonged exposure can cause headache, tiredness.
3%
Breathing increases to twice normal rate and becomes labored. Weak narcotic effect. Impaired hearing, headache, increased blood pressure and pulse rate.
4 – 5%
Breathing increases to approximately four times normal rate, symptoms of intoxication become evident, and slight choking may be felt.
5 – 10%
Characteristic sharp odor noticeable. Very labored breathing, headache, visual impairment, and ringing in the ears. Judgment may be impaired, followed within minutes by loss of consciousness.
10 – 100%
Unconsciousness occurs more rapidly above the 10% level. Prolonged exposure to high concentrations may eventually result in death from asphyxiation.

All true, but the subjective distress is almost entirely caused by
the high CO2. Humans don’t have good hypoxia sensors, and people have
walked into nitrogen filled rooms and died, before they even realized
there was anything wrong. You can breathe into a closed circuit which
takes out the CO2 until you pass out from hypoxia, without much
discomfort at all. On the other hand, in a submarine or someplace
where CO2 is building up but there’s plenty of oxygen, it’s intensely
uncomfortable, and feels like dying. So does breathing that 5% CO2 95%
O2 medical mix they treat CO victims with.

And when the CO2 hits about 7% to 10% of your ambient air, you DO
die. Even if the rest is O2. It’s CO2 narcosis, and it shuts you
down. 5% CO2 is about 40 Torr, your normal blood level. So if you
breathe that, you go up to 80 Torr, enough to black you out unless you
hyperventilate. Double your minute volume and you can get down to 60
Torr, but you feel crummy. At 10% there’s no way to keep below about
90 Torr, and (unless you’re a chronic COPD patient who’s used to high
CO2s and has a high bicarb and other compensatory mechanisms) you black
out. Then quit hyperventilating. Then quit breathing entirely.

(Included to show that the combined effects of carbon dioxide and a shortage of oxygen are much more intense than either of the two conditions alone.)
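The Torr figures quoted above are just partial-pressure arithmetic: the CO2 fraction times total pressure (one atmosphere = 760 Torr). A quick check:

```python
# Partial pressure of CO2 at a given volume fraction, at one atmosphere.
# 5% gives 38 Torr - roughly the "about 40 Torr" quoted above.

ATM_TORR = 760.0

def co2_partial_pressure_torr(fraction):
    """Partial pressure of CO2 in Torr for a given volume fraction of air."""
    return fraction * ATM_TORR

print(co2_partial_pressure_torr(0.05))   # 5% CO2 inspired
print(co2_partial_pressure_torr(0.10))   # 10% CO2 inspired
```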

So firstly, it is not benign above 50,000 ppm.
Secondly, it is not poisonous, but it kills:

Deaths:
Look up “choke damp” in mines.
Look up Lake Nyos (2000 deaths) / Lake Monoun (37 deaths).

So please cut the stuff about how CO2 is the stuff of life


UK spaghetti temperatures

A spaghetti plot of UK temperatures. Data from the Met Office:

An average of those stations


noise tree rings and stuff

But surely random sequences added together are just that: random. Because they are random, there will be random sequences that conform to any curve required, but outside the conformance the sequences will fall back to random = average zero.

Surely what is being proposed is that tree growth is controlled by many factors - not randomness, just noise and a combination of factors.
Trees will not grow at -40C; trees will not grow at +100C.
Trees do grow well at a temperature in between (all else being satisfactory).

Choosing trees that grow in tune with the temperature means that if they extend beyond the temperature record, then there is a greater possibility that they will continue to grow in tune with the temperature. If they grow to a different tune then they are invalid responders.

A long time ago I posted a sequence of pictures showing what can be obtained by adding and averaging a sequence of aligned photos - the only visible data was the church and sky glow. I added 128 of these images together and obtained this photo:

Note that it also shows the imperfections in the digital sensor (the window frame effect)
ImageShack did have a single image with the gamma turned up to reveal the only visible content (church + sky), but they've lost it!

The picture was taken in near dark conditions.
A flash photo of the same:

By removing all invalid data (pictures of the wife, the kids, flowers etc) that do not have the church and sky, a reasonable picture of the back garden appears from the noise.
Of course I may have included a few dark pictures with 2 streetlights in those locations, but with enough copies of the correct image these will have a diminishing effect.
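The stacking argument can be put in numbers: averaging N noisy readings of the same fixed signal shrinks the random noise by roughly sqrt(N). A toy one-pixel model (not real image code):

```python
# Toy one-pixel model of frame stacking: a faint fixed signal buried in
# unit-variance noise, averaged over 128 "frames" as in the post. The
# signal level here is an invented number.
import random

random.seed(1)
SIGNAL = 0.2   # assumed faint "church + sky glow" level in one pixel
N = 128        # number of aligned frames

frames = [SIGNAL + random.gauss(0.0, 1.0) for _ in range(N)]
stacked = sum(frames) / N

print(f"one frame: {frames[0]:.3f}, stack of {N}: {stacked:.3f} (true value {SIGNAL})")
```

Any single frame is dominated by the noise; the 128-frame average sits close to the true value, which is why the church emerges from the dark frames.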

This cap shape means growth must have a dependence on temperature. It may not be linear, but it must be there.

Somewhere between 15C and 100C the growth must start declining. Did trees pass the optimum in the 60s?
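A minimal sketch of that cap-shaped response; the optimum and width below are invented round numbers, chosen only so growth is zero at the -40C and +100C extremes mentioned above:

```python
# Illustrative capped (inverted-parabola) growth response to temperature.
# Optimum and width are made-up numbers for the sketch.

def growth(temp_c, optimum=15.0, width=25.0):
    """Relative growth: 1 at the optimum, 0 at optimum +/- width and beyond."""
    return max(0.0, 1.0 - ((temp_c - optimum) / width) ** 2)

for t in (-40, 5, 15, 25, 100):
    print(t, round(growth(t), 3))
```

Below the optimum a tree is a positive responder (growth rises with temperature); above it, the same tree responds negatively, which is the "passed the optimum" possibility raised above.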

Uncontrolled emissions in the 60s, 70s and 80s were known to cause acid rain (to the extent that some countries were forced to add lime to lakes to prevent damage); there was plenty of evidence that trees were being damaged also.

Is it not true to say damaged trees = slow growth?

There are many factors that can slow tree growth, but apart from over-temperature these effects will be diminished by limited industrialisation (before 1900?).

Trees are rubbish thermometers, but in all the noise there MUST be a temperature signal. A large local sample will lower the noise from sickness or damage. A large global sample will lower the noise from changes in soil fertility, etc.

Nothing will remove the noise from CO2 fertilisation, or other global events.

Some trees growing at the limit of their water needs may be negatively affected by rises in temperature above their minimum growing value - growing in heat requires more water. These will always show a negative growth response to temperature. But if averaged with enough positive responders then these will be insignificant.

But the signal that remains must, when averaged, contain a temperature signal (not necessarily linear).

"Overall, the Program's cap and trade program has been successful in achieving its goals. Since the 1990s, SO2 emissions have dropped 40%, and according to the Pacific Research Institute, acid rain levels have dropped 65% since 1976.[16][17] However, this was significantly less successful than conventional regulation in the European Union, which saw a decrease of over 70% in SO2 emissions during the same time period.[18]
In 2007, total SO2 emissions were 8.9 million tons, achieving the program's long term goal ahead of the 2010 statutory deadline.[19]
The EPA estimates that by 2010, the overall costs of complying with the program for businesses and consumers will be $1 billion to $2 billion a year, only one fourth of what was originally predicted.[16]"

"However, the issue of acid rain first came to the attention of the international community in the late 1960s, having been identified in certain areas of southern Scandinavia, where it was damaging forests. The matter quickly became an international issue when it was discovered that the acid deposits in these areas were a result of heavy pollution in the UK and other parts of northern Europe.

Acid rain and air pollution emerged from the industrial boom of the early 1900s onwards and the increasing levels of chemical production associated with these processes. The building of taller industrial chimneys from the 1960s onwards was largely held to be responsible for pollutants generated in the UK blowing as far as Scandinavia. "

CO2 and IR absorption

October 16th, 2009 at 4:03 am
Re: thefordprefect (#186),
"From what I have seen, the logarithmic effect is usually explained by the absorption bands getting full - i.e. no more radiation can be absorbed. Radiation is then absorbed by smaller absorption bands and by the under-used width of the CO2 bands.
And the CO2 GH effect is vastly lessened by many of the bands falling within the H2O bands."
I don't know where you have seen that, but this explanation is not even wrong.
It is absolutely and totally forbidden that a band, any band, gets "full" or "saturated".
The population that is in an excited state is a CONSTANT for a given temperature (f.ex the CO2 15µ excited state represents 5% of the total CO2 population at room temperature). This is prescribed by the MB distribution of the quantum states.
It doesn't depend on the number of molecules, the intensity of radiation or the age of the captain. Only temperature.
So whatever amount of IR you throw at a CO2 population, they will absorb it all and REEMIT.
They can't do anything else, because they must do whatever it takes to keep the percentage of excited states constant.
Imagine a dam (the CO2 molecules) and a lake whose level (the percentage of excited states) is exactly at the top of the dam.
If you increase the flow into the lake (absorbed radiation), all that will happen is that the flow over the top of the dam (emitted radiation) will increase by exactly the same amount.
If you increase the height of the dam (temperature), the level of the lake will go through a transient until it reaches the new top, and then it's again exactly as described above.
There is no "saturation".
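The quoted point that the excited-state population depends only on temperature can be checked roughly with a Boltzmann factor for the CO2 15-micron bending mode. This simple two-level estimate ignores degeneracy, so treat the number as order-of-magnitude only; it lands in the same few-percent range as the quote's 5% figure:

```python
# Rough Boltzmann estimate of the fraction of CO2 molecules in the
# 15-micron excited state at a given temperature. Two-level model,
# degeneracy ignored, so the result is only indicative.
import math

h = 6.626e-34    # Planck constant, J s
c = 2.998e8      # speed of light, m/s
k = 1.381e-23    # Boltzmann constant, J/K

def excited_fraction(wavelength_m, temp_k):
    """exp(-E/kT) for a transition at the given wavelength."""
    x = (h * c / wavelength_m) / (k * temp_k)
    return math.exp(-x)

print(f"{excited_fraction(15e-6, 288):.3f}")   # a few percent at ~288 K
```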


Oceans as temperature controllers

Re: tallbloke (#41),
I believe this is due to the long solar minimum. When the sunspot count is above 40 or so, the oceans are net gainers of solar heat energy. When the sun is quiet for a while, that energy makes its way back to the surface and is released. The last five solar minima have been followed within 12 months by an El Niño.

Are you also posting as Stephen Wilde on the WUWT blog? You seem to be pushing the same ideas!

SW radiation penetrates sea water further than LW radiation. However, this does not mean that SW radiation penetrates 20 m of water and suddenly transfers all its energy at that depth. It is progressively absorbed on the way down, until at depth there is no more SW radiation left. So IR heats the surface only; UV heats mainly the surface. Air in contact with this sea surface is rapidly heated by the water, and the water cools fractionally ONLY if the air temp is less than the water temp. If the water temp is less than the air temp (as it is during the daylight hours - usually) then the air will be cooled and the water warmed very fractionally.

The water temperature varies on a yearly basis round the UK (I assume it does round the rest of the globe?). There is no year-long lag in temperature fluctuation as seasons change (perhaps only a month??).

My question to you is the same as it has been to Mr. Wilde - how is the ocean going to store this heat over many years as you suggest and then release it to the atmosphere?

Deep water (more than 900 m) is at 4C; 700 m averages 12C; and at the surface it is 22C, at an air temp of ????

If the heat is stored in the upper layers then it is continuously losing the "heat" to COOLER air.
If it is in layers below 900 m, then how is 4C water going to up-well to release heat stored at 4C to air at 5C (for example)?

Assuming it were possible to get heat energy stored at 4C to transfer that energy to air at 12C, how do you prevent these heat storage layers mixing as the sea slops around for 5 to 10 years?

I would agree that the oceans act as a big temperature-smoothing "capacitor", reducing the yearly variations. Much more than this I need a better physical explanation for, please.

A further point: the AMO is often implicated in controlling air temperatures. This was posted on WUWT:
Comparing the AMO with HadCRUT3V and HadCRUT3NH there is a wonderful correlation; not so good with CET:

Apart from the increased trend caused by ?something?, all the slow humps and dips appear in the right places, and even the rapid changes appear aligned (to the eye!).

So if we zoom in and look at the signals through a much longer moving average the dips again align.

The dips in HADCRUT seem to occur a few months ahead of the AMO, and the peaks are a bit off. Not sure why CET has so little correlation, but hey, there must be a connection.
If air temperature is driving the AMO, then one would expect the air temp changes to occur before the AMO's.
And vice versa.

So now lets look at the same date range through shorter moving averages.

Now it becomes interesting: sometimes the air temp leads the AMO, and sometimes the AMO leads the air temp.

If the AMO drives temp then there is no way that the AMO can lag air temperature.
And vice versa.

To me this says that there is an external driver, or the data is faulty.

Any thoughts?
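One way to put the lead/lag question on firmer ground than eyeballing moving averages is to correlate the two series at a range of lags and see where the correlation peaks. A sketch on synthetic data, where series b is series a delayed by 3 steps plus a little noise:

```python
# Lead/lag via lagged correlation. Synthetic stand-ins for AMO and air
# temperature: b is a delayed copy of a, so the method should recover
# a peak at lag = +3 (b lagging a).
import numpy as np

rng = np.random.default_rng(0)
n, true_lag = 500, 3
raw = np.convolve(rng.standard_normal(n + 50), np.ones(12) / 12, mode="same")
a = raw[25:-25]                                      # smoothed "driver" series
b = np.roll(raw, true_lag)[25:-25] + 0.05 * rng.standard_normal(n)

def lagged_corr(x, y, lag):
    """Correlation of x[i] with y[i + lag]; y lags x when the peak lag > 0."""
    if lag > 0:
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]
    if lag < 0:
        return np.corrcoef(x[-lag:], y[:lag])[0, 1]
    return np.corrcoef(x, y)[0, 1]

best = max(range(-24, 25), key=lambda l: lagged_corr(a, b, l))
print(best)   # lag with the highest correlation
```

On real AMO/HadCRUT monthly data the same scan would show whether one series consistently leads, the peak wanders (consistent with a common external driver), or no clear peak exists (noisy data).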

Some interesting stuff, but not too useful:

interesting book (full)

This is the one for wavelength and penetration depth in ocean:

IR whacks the water molecules into motion, UV less so - check the absorption bands of water vapour.


McIntyre refuses offer to do real science

from dot earth
October 5, 2009, 2:41 pm Climate Auditor Challenged to Do Climate Science
By Andrew C. Revkin
Bloggers skeptical of global warming’s causes* and commentators fighting restrictions on greenhouse gases have made much in recent days of a string of posts on Climateaudit.org, one of the most popular Web sites aiming to challenge the deep consensus among climatologists that humans are setting the stage for generations of disrupted climate and rising seas. In the posts, Stephen McIntyre, questions sets of tree-ring data used in, or excluded from, prominent studies concluding that recent warming is unusual even when compared with past warm periods in the last several millenniums (including the recent Kaufman et al. paper discussed here).

Mr. McIntyre has gained fame or notoriety, depending on whom you consult, for seeking weaknesses in NASA temperature data and efforts to assemble a climate record from indirect evidence like variations in tree rings. Last week the scientists who run Realclimate.org, several of whom are authors of papers dissected by Mr. McIntyre, fired back. The Capital Weather Gang blog has just posted its analysis of the fight. One author of an underlying analysis of tree rings, Keith Briffa, responded on his Web site and on Climateaudit.org.

What is novel about all of this is how the blog discussions have sidestepped the traditional process of peer review and publication, then review and publication of critiques, and counter-critiques, by which science normally does that herky-jerky thing called knowledge building. The result is quick fodder for those using the Instanet to reinforce intellectual silos of one kind or another.

I explored this shift in the discourse in some e-mail exchanges with Mr. McIntyre and some of his critics, including Thomas Crowley, a University of Edinburgh specialist in unraveling past climate patterns. Dr. Crowley and Mr. McIntyre went toe to toe from 2003 through 2005 over data and interpretations. I then forwarded to Mr. McIntyre what amounted to a challenge from Dr. Crowley:

Thomas Crowley (now in Edinburgh) has sent me a note essentially challenging you to develop your own time series [of past climate patterns] (kind of a “put up or shut up” challenge). Why not do some climate science and get it published in the literature rather than poking at studies online, having the blogosphere amplify or distort your findings in a kind of short circuit that may not help push forward understanding?

As [Dr. Crowley] puts it: “McIntyre is really tiresome - notice he never publishes an alternate reconstruction that he thinks is better, oh no, because that involves taking a risk of him being criticized. He just nitpicks others. I don’t know of anyone else in science who actually does such things but fails to do something constructive himself.”

Here’s Mr. McIntyre’s reply (to follow references to publications you’ll need to refer to the linked papers). In essence, he says he sees no use in trying his own temperature reconstruction given the questions about the various data sets one would need to utilize:
The idea that I’m afraid of “taking a risk” or “taking a risk of being criticized” is a very strange characterization of what I do. Merely venturing into this field by confronting the most prominent authors at my age and stage of life was a far riskier enterprise than Crowley gives credit for. And as for “taking a risk of being criticized”? Can you honestly think of anyone in this field who is subjected to more criticism than I am? Or someone who has more eyes on their work looking for some fatal error?

The underlying problem with trying to make reconstructions with finite confidence intervals from the present roster of proxies is the inconsistency of the “proxies,” a point noted in McIntyre and McKitrick (PNAS 2009) in connection with Mann et al 2008 (but applies to other studies as well) as follows:

Paleoclimate reconstructions are an application of multivariate calibration, which provides a theoretical basis for confidence interval calculation (e.g., refs. 2 and 3). Inconsistency among proxies sharply inflates confidence intervals (3). Applying the inconsistency test of ref. 3 to Mann et al. A.D. 1000 proxy data shows that finite confidence intervals cannot be defined before ~1800.

Until this problem is resolved, I don’t see what purpose is served by proposing another reconstruction.

Crowley interprets the inconsistency as evidence of past “regional” climate, but offers no support for this interpretation other than the inconsistency itself - which could equally be due to the “proxies” not being temperature proxies. There are fundamental inconsistencies at the regional level as well, including key locations of California (bristlecones) and Siberia (Yamal), where other evidence is contradictory to Mann-Briffa approaches (e.g. Millar et al 2006 re California; Naurzbaev et al 2004 and Polar Urals re Siberia). These were noted in the N.A.S. panel report, but Briffa refused to include the references in I.P.C.C. AR4. Without such detailed regional reconciliations, it cannot be concluded that inconsistency is evidence of “regional” climate as opposed to inherent defects in the “proxies” themselves.

The fundamental requirement in this field is not the need for a fancier multivariate method to extract a “faint signal” from noise – such efforts are all too often plagued with unawareness of data mining and data snooping. These problems are all too common in this field (e.g. the repetitive use of the bristlecones and Yamal series). I think that I’ve made climate scientists far more aware of these and other statistical problems than previously, whether they are willing to acknowledge this in public or not, and that this is highly “constructive” for the field.

As I mentioned to you, at least some prominent scientists in the field accept (though not for public attribution) the validity of our criticisms of the Mann-Briffa style reconstruction and now view such efforts as a dead end until better quality data is developed. If this view is correct, and I believe it is, then criticizing oversold reconstructions is surely “constructive” as it forces people to face up to the need for such better data.

Estimates provided to me (again without the scientists being prepared to do so in public) were that the development of such data may take 10-20 years and may involve totally different proxies than the ones presently in use. If I were to speculate on what sort of proxies had a chance of succeeding, it would be ones that were based on isotope fractionation or other physical processes with a known monotonic relationship to temperature and away from things like tree ring widths and varve thicknesses. In “deep time,” ice core O18 and foraminifera Mg/Ca in ocean sediments are examples of proxies that provide consistent or at least relatively consistent information. The prominent oceanographer Lowell Stott asked to meet with me at AGU 2007 to discuss long tree ring chronologies for O18 sampling. I sent all the Almagre cores to Lowell Stott’s lab, where Max Berkelhammer is analyzing delO18 values.

Underlying my articles and commentary is the effort to frame reconstructions in a broader statistical framework (multivariate calibration) where there is available theory, a project that seems to be ignored both by applied statisticians and climate scientists. At a 2007 conference of the American Statistical Association to which Caspar Ammann (but not me) was invited, it was concluded:

While there is undoubtedly scope for statisticians to play a larger role in paleoclimate research, the large investment of time needed to become familiar with the scientific background is likely to deter most statisticians from entering this field. http://www.climateaudit.org/?p=2280

I’ve been working on this from time to time over the past few years and this too seems “highly constructive” to me and far more relevant to my interests and skills than adding to the population of poorly constrained “reconstructions,” as Crowley proposes.

In the meantime, studies using recycled proxies and problematic statistical methods continue to be widely publicized. Given my present familiarity with the methods and proxies used in the field, I believe that there is a useful role for timely analysis of the type that I do at Climate Audit. It would be even more constructive if the authors rose to the challenge of defending their studies.

Given the importance of climate change as an issue, it remains disappointing that prompt archiving of data remains an issue with many authors and that funding agencies and journals are not more effective in enforcing existing policies or establishing such policies if existing policies are insufficient. It would be desirable as well if journals publishing statistical paleoclimate articles followed econometric journal practices by requiring the archiving of working code as a condition of review. While progress has been slow, I think that my efforts on these fronts, both data and code, have been constructive. It is disappointing that Crowley likens the archiving of data to doing a tax return. It’s not that hard. Even in blog posts (e.g. the Briffa post in question), I frequently provide turnkey code enabling readers to download all relevant data from original sources and to see all statistical calculations and figures for themselves. This is the way that things are going to go – not Crowley’s way.

So should this all play out within the journals, or is there merit to arguments of those contending that the process of peer review is too often biased to favor the status quo and, when involving matters of statistics, sometimes not involving the right reviewers?

Another scientist at the heart of the temperature-reconstruction effort, Michael Mann of Pennsylvania State University, said that if Mr. McIntyre wants to be taken seriously he has to move more from blogging to publishing in the refereed literature.

“Skepticism is essential for the functioning of science,” Dr. Mann said. “It yields an erratic path towards eventual truth. But legitimate scientific skepticism is exercised through formal scientific circles, in particular the peer review process.” He added: “Those such as McIntyre who operate almost entirely outside of this system are not to be trusted.”


Grape harvest

Nothing seems to give a useful proxy for temperature. Some of the better ones are grape harvest and budburst dates, but these only go back to about the 1300s.

Note that grape harvest dates have not been converted to temp, so high temp = early harvest!

Statisticians and the real world

What a strange world we live in.

We have McIntyre and acolytes saying in one breath:
1. It is not valid to sort samples before you analyse them; Briffa should have used all samples from the area, and then come to a conclusion.
2. Then they say the 10/12 Briffa trees should not have been included, as we have these 34 Schweingruber(?) trees, and look - no 20thC warming.
3. Someone then says that the Briffa trees should have been included.
4. McIntyre adds them and finds a smaller hockey stick.
5. McIntyre analyses the Briffa trees and finds a golden hockey-stick tree which provides most of the late-20thC warming.
6. McIntyre then says this result is 8 sigma outside the normal and should not be included.
How do you reconcile statement 1 with statement 6?
In my view, if you are not allowed to sort for correlation between ring width and temperature over the period where we have instrumental records, then you are not allowed to sort at all.

Consider this scenario

At a junk sale you purchase a number of instruments measuring various environmental parameters over time. None are very accurate, and you have no idea which parameter they are measuring. You want to record temperature, so you set them up in the same location. Some years later you can afford a calibrated temperature recorder, which you also set up in the same location.
Some of these instruments will have recorded sunlight, precipitation, soil nutrient levels, fungal spore levels, ambient temperature, and temperature of the soil 1 metre down.
If you want to know what the temperature was when you set up the first instruments do you
a. normalise all readings of all instruments then average them.
b. average them all without normalising
c. compare the outputs from all instruments with the calibrated temperature recorder and throw out all that show no correlation; normalise the remaining results and then average them
d. as c. but additionally throw out units deviating by significant amounts from the average.

Which of a.-d. is going to give you the best historic temperatures?

Personally, as an engineer not a statistician, I would go with d., or c. if there are insufficient instruments to find the outliers.
I realise that this is going to bias the results towards giving the same result as the calibrated instrument, but may I suggest this is exactly what you want.

It seems a statistician would go for a., as this would not bias the result towards valid temperatures. I just cannot understand this.
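The junk-sale scenario can be simulated. Everything below is invented for illustration (the "junk" instruments are modelled as pure noise, the noise levels and correlation threshold are arbitrary choices); it just compares strategy a. (average everything) with strategy c. (screen by correlation over the calibrated period):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
cal = slice(100, 200)            # period covered by the calibrated recorder

# Invented "true" temperature history: a trend plus a slow wiggle
true_temp = np.linspace(0.0, 5.0, n) + np.sin(np.arange(n) / 15.0)

# Six instruments that actually track temperature (plus noise),
# six that track something else entirely (modelled here as pure noise)
good = [true_temp + rng.normal(0.0, 0.5, n) for _ in range(6)]
junk = [rng.normal(0.0, 1.0, n) for _ in range(6)]
instruments = good + junk

def normalise(x):
    return (x - x.mean()) / x.std()

# Strategy a: normalise everything and average
avg_all = np.mean([normalise(x) for x in instruments], axis=0)

# Strategy c: keep only instruments that correlate with the calibrated
# record over the overlap period, then normalise and average
kept = [x for x in instruments
        if np.corrcoef(x[cal], true_temp[cal])[0, 1] > 0.5]
avg_screened = np.mean([normalise(x) for x in kept], axis=0)

target = normalise(true_temp)
err_all = np.mean((avg_all - target) ** 2)
err_screened = np.mean((avg_screened - target) ** 2)
print(len(kept), err_screened < err_all)
```

With these made-up numbers the screened average tracks the truth much better, which is the engineer's intuition. The statistician's counter-worry is that if the "instruments" were all noise, screening on the calibration window would still manufacture agreement there while the pre-calibration reconstruction became meaningless.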


dendro matching

September 29th, 2009 at 7:55 pm
Can I see if I have this correct?

1. We have a number of tree ring samples from different areas
2. Briffa has selected only rings that match valid local temperature records.
3. Briffa assumes that these trees stay in sync with temperature to an early time in their life.
4. Briffa matches older dead(?) trees to those that match local temperatures and says that these must therefore also be in sync with local temp.
5. repeat 4. and until tree rings are available at the required earliest age.
6. Statistics does not allow matching of tree ring temperature proxy to real temperature because this is cherry picking and will always produce a hockey stick

I agree that extending backwards from multiple overlapped records must produce greater deviations from reality.
I agree that a single tree ring record can deviate once away from the matched record.

However, think as an engineer trying to find an accurate reading of, for example, a time series of a voltage supply to a building monitored with inaccurate chart recorders over various lengths of cable (= added noise). If one recorder is known to have been calibrated (to national standards) over the recent part of that record, then I would look at the other recorders over this calibrated period and throw out all the outlier readings (they are wrong now, and I do not know if they were ever correct, so there is no reason to include them in my determination). Some of the more accurate recorders may have read high before the calibrated range and some may have read low. Some will be recording significant noise compared to the majority, and these could be ignored if sufficient others remain to establish this fact. I would then average the remainder and suggest that this average record is the most likely record of the voltage.
As a statistician, are you suggesting that all recorder outputs should be averaged, including those reading zero, those reading full scale, and those whose readings deviate grossly from the mean?
This seems wrong and will certainly give an invalid result. Am I wrong?

But then we need to look at the tree sampling.
Were the trees sampled totally at random - trees in water, trees in bogs, trees scraping by on a solid rock, trees near to death, young trees etc?
Were they at the tree line or sea level?
Were they all the same "make"?

I would suggest that the actual sampling was not random. Altitude, health, species, etc. are all non-randomly chosen (cherry-picked).

If this is the case, what is the point of suggesting that they should not be further chosen to best represent temperature? What would be the point, for example, of choosing a tree that fell over during its life but continued to grow with diminished root function? What would be the point of choosing a tree with growth limited by water/nutrients? What would be the point of including a tree 100s of km further north than the rest? Would your statistical methods require that these be included in the sequence?

Steve McIntyre:
September 29th, 2009 at 9:28 pm
Re: thefordprefect (#244),

Some of your premises are not yet demonstrated. There are a couple of different levels of consolidation: at a "site", multiple cores are taken, usually within fairly close proximity to one another. These are composited into a "site chronology". Briffa unusually composites samples from areas not at all close to one another in Avam-Taimyr and Tornetrask-Finland. At Yamal, for some reason, he has not composited samples from Polar Urals, which is closer to Yamal than Avam is to Taimyr.

2. Briffa has selected only rings that match valid local temperature records.
[There are two issues: selection of trees at a site e.g. Yamal and selection of sites, e.g. Avam and a nearby Schweingruber site into Taimyr. The procedures are not described. It is not known how Yamal core selection decisions were made and which were made before CRU and which at CRU].

3. Briffa assumes that these trees stay in sync with temperature to an early time in their life.
[Not necessarily. We don't know what was done. It is possible that trees with elevated growth rates were preferentially selected, but we don't know that for sure.]

4. Briffa matches older dead(?) trees to those that match local temperatures and says that these must therefore also be in sync with local temp.
[ no. crossdating is done by pattern matching.]

5. repeat 4. and until tree rings are available at the required earliest age.
6. Statistics does not allow matching of tree ring temperature proxy to real temperature because this is cherry picking and will always produce a hockey stick.
[this is a different issue.]

FP, I don't have time to provide personal education to everyone trying to get up to speed. I've asked people who don't have specialist viewpoints to comment on Unthreaded.


Climate primer refs


interesting links

effect of snow and grass on cooling
effect of cloud and wind on cooling
tree rings briffa
tree ring yamal
Here’s another interesting dissertation with descriptions of the Yamal trees and environment:

Good descriptions of the environment and the trees sampled.

Unfortunately (???) he ends up with yet another hockey stick:
See figure 18.
Figure 18 - change in the mean summer temperature (deviations from the average), smoothed by a 50-year filter, and the dynamics of the polar timber line


How HADCRUT temp adjustments are made


water vapour and GHGs

anna v (20:43:39) :
CO2 is a trace greenhouse element. Period. The tail does not wag the dog.

BUT consider ozone - even less than a trace gas. If it were reduced by a few percent globally, UVb would be playing hell with our genes, plant growth and plankton survivability. A 1% change in O3 == a 2% increase in UV.
From Wiki, O3 gives 3-7% of the GHG effect from only 0.00006 percent of the atmosphere (0.6 ppm).
Is this not a case of the hair on the flea in the tail wagging the dog? Could CO2 be similar?

Stephen Wilde (23:50:41) :
The composition of the air is merely an enabler. Once it has served its function in permitting the creation of liquid oceans and a hydrological cycle, its significance becomes marginal.

You are making some really wild leaps here.
As you said energy balance is everything. The earth is a "grey" body radiator and if the atmosphere were suddenly lost radiation away would be determined by the various emissivities of the different terrain. And temperature will stabilise when grey body radiation out = radiation in. This would bottom out at about -18C.

Adding atmosphere makes it more complex so lets assume all GHGs are removed but the same atmospheric pressure were present (O2, N2, H2 are non GHGs). Without GHGs thermal radiation from the grey body will not be absorbed so only conduction and thence convection will heat the air. With no absorption by GHGs the grey body + atmosphere will still loose the same amount of radiation as if the atmosphere were not present (possibly more as the conducted heat to air will also radiate from the air).
So without any GHGs but with an amosphere the earth will radiate as before - as a grey body.
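The ~-18C figure can be checked with a quick Stefan-Boltzmann estimate. This simplifies the "grey body" above to a uniform blackbody with Earth's albedo (emissivity 1, the usual textbook approximation):

```python
# Back-of-envelope check of the ~ -18C no-greenhouse temperature,
# using the Stefan-Boltzmann law.
SOLAR_CONSTANT = 1361.0   # W/m^2 at top of atmosphere
ALBEDO = 0.30             # fraction of sunlight reflected
SIGMA = 5.670374e-8       # Stefan-Boltzmann constant, W/m^2/K^4

# Absorbed flux averaged over the whole sphere (the factor 4 is the
# ratio of the sphere's surface area to its cross-sectional disc)
absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4

# Equilibrium: emitted = absorbed, so T = (F / sigma)^(1/4)
t_kelvin = (absorbed / SIGMA) ** 0.25
print(round(t_kelvin - 273.15, 1))   # ≈ -18.6 C
```

The measured surface average of roughly +14-15C minus this estimate is the usual ~33C attributed to the greenhouse effect.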

Major GHG effect (wiki)
water vapor, which contributes 36–72%
carbon dioxide, which contributes 9–26%
methane, which contributes 4–9%
ozone, which contributes 3–7%
Water vapor accounts for the largest percentage of the greenhouse effect, between 36% and 66% for water vapor alone, and between 66% and 85% when factoring in clouds.[8] ...
The Clausius-Clapeyron relation establishes that air can hold more water vapor per unit volume when it warms.

If we now add only the normal concentration of water vapour to an earth without other GHGs, and assume it starts at the average of 13-15degC - but that is the temperature with a full complement of GHGs, and water vapour only provides 72-85% of the GHG effect, so the temperature will fall. Falling temperature will reduce the water vapour content of the air, which will lower the GHG effect, so the temperature will fall further, and so on - a positive feedback that will eventually lead to very low levels of water vapour, and hence we will be back at near grey-body temperature.

If we now add the other GHGs into the atmosphere, then the grey-body radiation is reduced and temperature rises. Rising temperature leads to more water vapour, which leads to higher temperature, etc. We then end up with a warming grey body. At some point the grey-body radiation out will equal the incoming radiation and we have temperature "stasis".
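The two feedback loops just described can be sketched as a fixed-point iteration. All the numbers here are illustrative assumptions, not measured values: greenhouse warming is modelled as a base term (the non-water GHGs) plus a water-vapour term proportional to warmth above the grey-body temperature:

```python
# Toy model of the water-vapour feedback loop described above.
T_GREY = -18.0       # no-greenhouse equilibrium, deg C (from the grey-body estimate)
BASE_GHG = 13.0      # assumed warming from CO2, CH4, O3 etc., deg C (invented)
WV_GAIN = 0.4        # assumed extra water-vapour warming per deg C above T_GREY (invented)

def equilibrium(base_ghg, wv_gain, steps=100):
    """Iterate temperature -> water vapour -> temperature to convergence."""
    t = T_GREY
    for _ in range(steps):
        water_vapour = max(0.0, wv_gain * (t - T_GREY))  # colder air holds less vapour
        t = T_GREY + base_ghg + water_vapour
    return t

# With the other GHGs present, the loop converges to a warm equilibrium;
# remove them (base_ghg = 0) and the feedback settles back to the grey-body value.
print(round(equilibrium(BASE_GHG, WV_GAIN), 1))   # 3.7
print(round(equilibrium(0.0, WV_GAIN), 1))        # -18.0
```

The iteration converges because the assumed gain is below 1; the "stasis" in the text corresponds to this fixed point, while a gain of 1 or more would be a true runaway.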

water in the atmosphere


Ozone Hole

Reference stuff


After the Antarctic Ozone Hole was discovered, some scientists took the view that it might be a natural event caused by volcanic chlorine emissions from Mount Erebus rather than manufactured chlorinated chemicals. Eventually, however, Mount Erebus was exonerated (Zreda-Gostynska et al., 1993). Most of the chlorine Mount Erebus throws up takes the form of hydrogen chloride (HCl), which (like other chlorine from natural sources) readily dissolves in the water vapour of the lower atmosphere well before it can reach the stratosphere.
For Mount Erebus to affect the ozone layer, the volcano would have to inject a large proportion of its hydrogen chloride directly into the stratosphere, above a height of about 10 km. Mount Erebus has been active since it was first observed by James Ross in 1840, but appears never to have erupted with the force necessary to send chlorine directly into the stratosphere. The mountain itself is almost 4,000 m high (3,794 m), but the volcanic plume seldom rises above 5,000 m. The amount of gas Mount Erebus emits also bears no relation to the size of the ozone hole. In the summer of 1983, chlorine emissions from Mount Erebus were about 170 tonnes a day. In the following seven summers, when ozone depletion was even more severe, the chlorine emissions ranged from one-tenth to one-quarter of the 1983 figure (Zreda-Gostynska et al., 1993).


Greenland and North Atlantic climatic conditions


Jørgen Peder Steffensen stuff

Of interest:

Some interesting stuff from Jørgen Peder Steffensen
and links on left
Climate change is man-made shows arctic research
New research shows that the temperature of the arctic region fell steadily from over 2000 years ago all the way up to 100 years ago. The cooling was caused by less solar radiation during the summer, and the cold temperatures would have continued undisturbed. But around the year 1900 there occurred a dramatic increase in temperature, and the new research results therefore provide further evidence of man's influence on the climate. The results are published in the scientific journal, Science.

NPI stuff

Unprecedented low twentieth century winter sea ice extent in the Western Nordic Seas since A.D. 1200

Abstract We reconstructed decadal to centennial variability of maximum sea ice extent in the Western Nordic Seas for A.D. 1200–1997 using a combination of a regional tree-ring chronology from the timberline area in Fennoscandia and δ18O from the Lomonosovfonna ice core in Svalbard. The reconstruction successfully explained 59% of the variance in sea ice extent based on the calibration period 1864–1997. The significance of the reconstruction statistics (reduction of error, coefficient of efficiency) is computed for the first time against a realistic noise background. The twentieth century sustained the lowest sea ice extent values since A.D. 1200: low sea ice extent also occurred before (mid-seventeenth and mid-eighteenth centuries, early fifteenth and late thirteenth centuries), but these periods were in no case as persistent as in the twentieth century. Largest sea ice extent values occurred from the seventeenth to the nineteenth centuries, during the Little Ice Age (LIA), with relatively smaller sea ice-covered area during the sixteenth century. Moderate sea ice extent occurred during thirteenth–fifteenth centuries. Reconstructed sea ice extent variability is dominated by decadal oscillations, frequently associated with decadal components of the North Atlantic Oscillation/Arctic Oscillation (NAO/AO), and multi-decadal lower frequency oscillations operating at ~50–120 years. Sea ice extent and NAO showed a non-stationary relationship during the observational period. The present low sea ice extent is unique over the last 800 years, and results from a decline started in late-nineteenth century after the LIA.


Short vs long term temperature events

from WUWT
anna v (00:36:51) :
Proof is in the pudding: the existing CO2, though growing, has not managed to stop a cooling PDO and it will stop the ice age
IFF radiation input/output to the earth is BALANCED (in = out), global warming/cooling will not happen. There will be weather, seasons, PDOs etc., but IFF in = out, the temperature averaged over decades will be constant. Short-term temperatures will fluctuate!
If in is not equal to out, then temperatures averaged over decades will show a rise/fall. If the in-out difference is small, then the average temperature change will be small compared to weather, seasons, PDO etc., BUT there is still a trend up or down which will become obvious when temperatures are averaged over long enough periods. This is where we are. Weather, seasons and PDOs happen, giving a wandering temperature, but they do not negate small continuous changes to the in/out balance.
The flip from ice age to temperate will be caused by a long-term in/out balance change, not by weather, seasons, PDO or other transient events.
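The averaging argument is easy to demonstrate numerically. The numbers below are invented for the demo (a 0.01 deg/yr imbalance trend, a 0.5-amplitude 60-year "PDO" and year-to-year weather noise); averaging over a full oscillation cancels the PDO term and leaves the trend visible:

```python
import numpy as np

# Toy series: small steady trend buried under large oscillation + noise
years = np.arange(120)
trend = 0.01 * years                            # 0.01 deg/yr from a small in/out imbalance
pdo = 0.5 * np.sin(2 * np.pi * years / 60.0)    # 60-year oscillation, amplitude 0.5
rng = np.random.default_rng(1)
weather = rng.normal(0.0, 0.3, years.size)      # year-to-year weather noise
temps = trend + pdo + weather

def block_means(series, window):
    """Average the series in consecutive blocks of `window` samples."""
    return [series[i:i + window].mean() for i in range(0, len(series), window)]

# Averaging over 60 years (one full oscillation) cancels the PDO term,
# so the difference between the two block means is dominated by the trend
m = block_means(temps, 60)
print(m[1] - m[0])    # close to 0.6 = 0.01 deg/yr * 60 yr
```

Year-to-year the series wanders by tenths of a degree either way, yet the two 60-year means differ by almost exactly the accumulated trend: transient events don't negate a small continuous imbalance.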

Sad reflection on a scientist (Plimer)

masonmart (23:49:30) :
Monbiot refused to debate the issues with Plimer (as all AGW proponents refuse debate with knowledgeable skeptics) and I, who know nothing, would gladly debate Climate change with Monbiot.

Monbiot has stated that he will not debate unless written answers are provided to his questions. These answers have not been provided, therefore no debate.

As you have read Plimer's book, from which all answers may be obtained (according to Plimer), perhaps you could answer both Plimer's and Monbiot's questions here?
Q to Plimer
1. The first graph in your book (Figure 1, page 11). How do you explain the discrepancy between the HadCRUT3 figure and your claim?
2. Figure 3 (page 25), a graph purporting to show that most of the warming in the 20th Century took place before 1945, closely resembles the global temperature graph in the first edition of Martin Durkin’s film The Great Global Warming Swindle - since retracted as false. What is the source for the graph you used?
3. You maintain that “the last two years of global cooling have erased nearly thirty years of temperature increase.” (page 25)
a. Please give the source for your claim.
b. How do you reconcile it with the published data?
4. In your discussion of global temperature trends, you maintain that “NASA now states that […] the warmest year was 1934.” (p99)
a. Are you aware that this applies only to the United States?
b. Was this a mistake or did you deliberately confuse these two datasets?
5. Discussing climate trends in the Arctic, you state that “the sea ice has expanded” (p198). Again, you give no reference.
a. Please give a source for this claim.
b. How do you explain the discrepancy between this claim and the published data? http://nsidc.org/arcticseaicenews/
6. You state that “If the current atmospheric CO2 content of 380 ppmv were doubled to 760 ppmv […] [a]n increase of 0.5C is likely” (p366). Again you give no source. Please provide a reference for this claim.
7. You claim that “About 98% of the greenhouse effect in the atmosphere is due to water vapour.” (p370). Ian Enting says “In some cases the numbers given by Plimer are exaggerated to such an extent as to imply that without water vapour, Earth’s temperature would be below absolute zero - a physical impossibility.”
a. Please provide a reference for your claim about water vapour.
b. Please explain how your two statements (that 98% of the greenhouse effect is caused by water vapour, and that 18C can be attributed to CO2) can both be true.
8. You cite a paper by Charles F Keller as the source of your claim that “satellites and radiosondes show that there is no global warming.” (p382)
a. How did you manage to reverse the findings of this paper?
b. Was it a mistake or was it deliberate misrepresentation?
9. You state “The Hadley Centre in the UK has shown that warming stopped in 1998″ (p391). Again you produce no reference.
a. Please give a reference for your claim.
b. How do you explain the discrepancy between your account of what the Hadley Centre says and theirs?
10. You state that “Volcanoes produce more CO2 than the world’s cars and industries combined.” (p413)
a. Please provide a reference for your claim.
b. How do you explain the discrepancy between this claim and the published data?
11. You maintain that “termite methane emissions are 20 times more potent than human CO2 emissions”. (p472) Please provide a source for this claim.

Plimer to Monbiot
1. From the distribution of the vines, olives, citrus and grain crops in Europe, UK and Greenland, calculate the temperature in the Roman and Medieval Warmings and the required atmospheric CO2 content at sea level to drive such warmings. What are the errors in your calculation? Reconcile your calculations with at least five atmospheric CO2 proxies. Show all calculations and justify all assumptions.

2. Tabulate the CO2 exhalation rates over the last 15,000 years from (i) terrestrial and submarine volcanism (including maars, gas vents, geysers and springs) and calc-silicate mineral formation, and (ii) CH4 oxidation to CO2 derived from CH4 exhalation by terrestrial and submarine volcanism, natural hydrocarbon leakage from sediments and sedimentary rocks, methane hydrates, soils, microbiological decay of plant material, arthropods, ruminants and terrestrial methanogenic bacteria to a depth of 4 km. From these data, what is the C12, C13 and C14 content of atmospheric CO2 each thousand years over the last 15,000 years and what are the resultant atmospheric CO2 residence times? All assumptions need to be documented and justified.

3. From first principles, calculate the effects on atmospheric temperature at sea level by changes in cloudiness of 0.5%, 1% and 2% at 0%, 20%, 40%, 60% and 80% humidity. What changes in cloudiness would have been necessary to drive the Roman Warming, Dark Ages, Medieval Warming and Little Ice Age? Show all calculations and justify all assumptions.

4. Calculate the changes in atmospheric C12 and C13 content of CO2 and CH4 from crack-seal deformation. What is the influence of this source of gases on atmospheric CO2 residence time since 1850? Validate assumptions and show all calculations.

5. From CO2 proxies, carbonate rock and mineral volumes and stable isotopes, calculate the CO2 forcing of temperature in the Huronian, Neoproterozoic, Ordovician, Permo-Carboniferous and Jurassic ice ages. Why is the “faint Sun paradox” inapplicable to the Phanerozoic ice ages in the light of your calculations? All assumptions must be validated and calculations and sources of information must be shown.

6. From ocean current velocity, palaeotemperature and atmosphere measurements of ice cores and stable and radiogenic isotopes of seawater, atmospheric CO2 and fluid inclusions in ice and using atmospheric CO2 residence times of 4, 12, 50 and 400 years, numerically demonstrate that the modern increase in atmospheric CO2 could not derive from the Medieval Warming.

7. Calculate the changes in the atmospheric transmissivity of radiant energy over the last 2,000 years derived from a variable ingress of stellar, meteoritic and cometary dust, terrestrial dust, terrestrial volcanic aerosols and industrial aerosols. How can your calculations show whether atmospheric temperature changes are related to aerosols? All assumptions must be justified and calculations and sources of information must be shown.

8. Calculate 10 Ma time flitches using W/R ratios of 10, 100 and 500 for the heat addition to the oceans, oceanic pH changes and CO2 additions to bottom waters by alteration of sea floor rocks to greenschist and amphibolite facies assemblages, the cooling of new submarine volcanic rocks (including MORBs) and the heat, CO2 and CH4 additions from springs and gas vents since the opening of the Atlantic Ocean. From your calculations, relate the heat balance to global climate over these 10 Ma flitches. What are the errors in your calculations? Show all calculations and discuss the validity of any assumptions made.

9. Calculate the rate of isostatic sinking of the Pacific Ocean floor resulting from post LGM loading by water, the rate of compensatory land level rise, the rate of gravitationally-induced sea level rise and sea level changes from morphological changes to the ocean floor. Numerically reconcile your answer with the post LGM sea level rise, oceanic thermal expansion and coral atoll drilling in the South Pacific Ocean. What are the relative proportions of sea level change derived from your calculations?

10. From atmospheric CO2 measurements, stable isotopes, radiogenic Kr and hemispheric transport of volcanic aerosols, calculate the rate of mixing of CO2 between the hemispheres of planet Earth and reconcile this mixing with CO2 solubility, CO2 chemical kinetic data, CO2 stable and cosmogenic isotopes, the natural sequestration rates of CO2 from the atmosphere into plankton, oceans, carbonate sediments and cements, hydrothermal alteration, soils, bacteria and plants for each continent and ocean. All assumptions must be justified and calculations and sources of information must be shown. Calculations may need to be corrected for differences in 12CO2, 13CO2 and 14CO2 kinetic adsorption and/or molecular variations in oceanic dissolution rates.

11. Calculate from first principles the variability of climate, the warming and cooling rates and global sea level changes from the Bölling to the present and compare and contrast the variability, maximum warming and maximum sea level change rates over this time period to that from 1850 to the present. Using your calculations, how can natural and human-induced changes be differentiated? All assumptions must be justified and calculations and sources of information must be shown.

12. Calculate the volume of particulate and sulphurous aerosols and CO2 and CH4 coeval with the last three major mass extinctions of life. Use the figures derived from these calculations to numerically demonstrate the effects of terrestrial, deep submarine, hot spot and mid ocean ridge volcanism on planktonic and terrestrial life on Earth. What are the errors in your calculations?

13. From the annual average burning of hydrocarbons, lignite, bituminous coal and natural and coal gas, smelting, production of cement, cropping, irrigation and deforestation, use the 25µm, 7µm and 2.5µm wavelengths to calculate the effect that gaseous, liquid and solid H2O have on atmospheric temperature at sea level and at 5 km altitude at latitudes of 20º, 40º, 60º and 80ºS. How does the effect of H2O compare with the effect of CO2 derived from the same sources? All assumptions must be justified and calculations and sources of information must be shown.

Remember, Plimer says the answers to his questions can be found in his books.

So we have Monbiot mainly requesting sources for Plimer's erroneous statements (surely a simple request to answer?) and Plimer requesting that a journalist provide original scientific research and derive models from first principles. As Monbiot admits, he is not a research scientist, so the questions are outside his knowledge.
Some of Monbiot's questions have now been answered on RealClimate.

Oh dear!!!!!!!!!!!
Rabett Run


Arctic Temps posting on WUWT

Arctic Temperatures – What Hockey Stick?

Circling the Arctic

What sudden recent warming? What Hockey Stick? I don’t see any.

By Lucy Skywalker Green World Trust


Not impressed with the title here.
I have now gone through the GISS (homogenised) data, differenced the monthly figures of each station, then averaged over the locations in the above map. It may not be a classical hockey stick but it's very close:
Over 2 degC difference between 1882 and 2008
A steady rise from 1966 onwards, reaching more than 0.5 degC higher than the 1936 temperature
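For what it's worth, the station-averaging step described above can be sketched like this. This is a hedged illustration with made-up station data, not the actual GISS processing: each station's absolute monthly temperatures are turned into anomalies (here against that station's own monthly means), and the anomalies are then averaged across stations.

```python
import numpy as np

def monthly_anomalies(temps):
    """temps: (years, 12) array for one station -> same-shape anomalies
    relative to that station's own long-term monthly means."""
    return temps - np.nanmean(temps, axis=0)

def regional_mean(stations):
    """stations: list of (years, 12) arrays -> (years, 12) mean anomaly
    averaged across stations."""
    anoms = np.stack([monthly_anomalies(s) for s in stations])
    return np.nanmean(anoms, axis=0)

# Demo with synthetic stations: a shared 2 degC warming trend over 50 years
# plus independent station noise. (Entirely made-up data, for illustration.)
rng = np.random.default_rng(0)
base = np.linspace(0.0, 2.0, 50)[:, None]           # shared trend, degC
stations = [base + rng.normal(0.0, 0.5, (50, 12)) for _ in range(5)]
mean_anom = regional_mean(stations)
```

With real data the shared warming signal should survive the cross-station averaging in the same way, while station-specific offsets cancel in the anomaly step.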


Fourier Series simulation of temp record

Here's a thing I've been toying with for a few days. Take the temperature record (HadCRUT3v global), run narrow-band bandpass filters over it to find the peak frequencies, phases and amplitudes, then add the filter outputs together and see what you get.
By taking the mid-band frequency, amplitude and phase, one should then be able to reconstruct the temperature record from a series of sine (cosine) waves; this is shown in the second plot.

Period bands (1/frequency) from 0.5 years to 1000 years were searched, but the longest period that produced any output was 150 years.

The green line in these plots is the sum of the 36 bands/cosines and, as can be seen, it has no trend. Something else is pushing the trend upwards.

One bodge has been to scale the amplitude of all bands by a factor of 2.3 to give the same sort of variability as the temperature record.

A second bodge is to add a trend line that forces the synthesised temperature to conform to HadCRUT3v. As can be seen, both the cosine and filter outputs follow the curve "rather well". Of particular note are the lack of temperature increase followed by a fall over the last few years, and the two peaks at 1877 and 1998, which are modelled with a "good" correlation.

There is a rapid warming between 1930 and 1945 which the synthesised data does not follow.

This plot shows the relative amplitudes and periods of the synthesised waves

The synthesis of course means that individual periods can be removed to see the effect of TSI etc.
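The decompose-and-resynthesise idea above can be sketched with an FFT standing in for the narrow-band filters (an illustrative substitution: the post used bandpass filters, and the band edges and demo series below are made up). Each band contributes its peak frequency, amplitude and phase, and the series is rebuilt as a sum of cosines.

```python
import numpy as np

def band_components(series, dt_years, band_edges):
    """For each (lo, hi) frequency band (cycles/year), return the peak
    bin's (frequency, cosine amplitude, phase)."""
    n = len(series)
    spec = np.fft.rfft(series - series.mean())
    freqs = np.fft.rfftfreq(n, d=dt_years)          # cycles per year
    comps = []
    for lo, hi in band_edges:
        idx = np.where((freqs >= lo) & (freqs < hi))[0]
        if idx.size == 0:
            continue
        k = idx[np.argmax(np.abs(spec[idx]))]       # strongest bin in band
        comps.append((freqs[k], 2.0 * np.abs(spec[k]) / n, np.angle(spec[k])))
    return comps

def synthesise(comps, t_years):
    """Rebuild the series as a sum of cosines."""
    out = np.zeros_like(t_years)
    for f, a, ph in comps:
        out += a * np.cos(2.0 * np.pi * f * t_years + ph)
    return out

# Demo: a made-up monthly series of two known cosines is recovered exactly.
dt = 1.0 / 12.0                                      # monthly sampling
t = np.arange(0.0, 160.0, dt)                        # 160 "years"
demo = 0.3 * np.cos(2 * np.pi * t / (160.0 / 3.0)) \
     + 0.1 * np.cos(2 * np.pi * t / 10.0)
comps = band_components(demo, dt, [(1 / 80, 1 / 30), (1 / 15, 1 / 8)])
recon = synthesise(comps, t)
```

Against a real record like HadCRUT3v, the residual between the resynthesised series and the data is where any trend not captured by the periodic components would show up.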

This is a work in progress but unless someone shows it to be a pile of poo I think it is interesting!


Icecore data CO2 CH4 Dust Temperature

I recently posted this on CA

The plots are from EPICA data, mainly, not Vostok.
The time scales are reversed (years before present).
The plots show temperature rise pretty much coincident with CO2 rise (samples are often too widely separated to say safely which came first).

Does temperature precede GHG change?
Some plots
CO2 in most cases rises at the same time as temperature. CH4 seems to terminate the warm period in many cases. Data is from the EPICA core, as this is more detailed than Vostok, BUT core dates can still be spaced at over 2k years per sample in some periods. N2O and O3 have not been plotted.
Where is the data that shows temperature rise precedes CO2?
0 to 40,000 years. GISP2 and EPICA temperatures are plotted on this graph. CO2's steady rise is simultaneous with temperature at ~17,500 ybp.
Note that only the Greenland GISP2 temperature shows a definite Younger Dryas; the Antarctic EPICA data shows a flattening only. The EPICA CH4 data shows a misplaced drop around the Younger Dryas. Note the dust levels during the low-temperature portion.

40k to 100k years: Note the dust levels are non-zero during this period and high during the low-temperature portion.

100k to 200k years: CO2 rises simultaneously with temperature at ~136 kybp. Note the dust levels are high during the low-temperature portion. CH4 termination of the warm period.

180k to 260k years: CO2 rises simultaneously with temperature at ~252 kybp; the 220 kybp rise is less well defined. Note the dust levels are high during the low-temperature portion. CH4 and CO2 termination of the warm periods.

280k to 360k years: CO2 rises simultaneously with temperature at ~341 kybp. Note the dust levels are high during the low-temperature portion. CH4 termination of the warm periods.

360k to 460k years: CO2 rises simultaneously with temperature at ~432 kybp. Note the dust levels are high during the low-temperature portion. CH4 termination of the warm period.

460k to 560k years: CO2 rises simultaneously with temperature at ~532 kybp. Note the dust levels are high during the low-temperature portion. CH4 termination of the warm period.

560k to 650k years: CO2 rises simultaneously with temperature at ~629.5 kybp. Note the dust levels are high during the low-temperature portion. CO2 and CH4 termination of the warm period.

650k to 760k years: CO2 rises simultaneously with temperature at ~740.5 kybp. Note the dust levels are high during the low-temperature portion. CO2 and CH4 termination of the warm period at ~694 kybp. Note the dip at 722 kybp has no CH4/CO2 driving; it is possible that dust levels rise at this time, but the granularity of the dust data is not fine enough to line up.

750k to 800k years: CO2 rises simultaneously with temperature at ~796 kybp. Note the dust levels are high during the low-temperature portion. CO2 termination of the warm period.


Methane data is from:

Loulergue, L., et al. 2008.
EPICA Dome C Ice Core 800KYr Methane Data.
IGBP PAGES/World Data Center for Paleoclimatology
Data Contribution Series # 2008-054.
NOAA/NCDC Paleoclimatology Program, Boulder CO, USA.

Age scale is gas age

CO2 data is from
0-22 kyr BP: Dome C (Monnin et al. 2001) measured at University of Bern
22-393 kyr BP: Vostok (Petit et al. 1999; Pepin et al. 2001; Raynaud et al. 2005) measured at LGGE in Grenoble
393-416 kyr BP: Dome C (Siegenthaler et al. 2005) measured at LGGE in Grenoble
416-664 kyr BP: Dome C (Siegenthaler et al. 2005) measured at University of Bern
664-800 kyr BP: Dome C (Luethi et al. (sub)) measured at University of Bern

Age scale is gas age

I assume the gas age takes into account the delay in trapping?

The age used is EDC3 and a comparison between dome fuji and vostok is here
The EDC3 chronology for the EPICA Dome C ice core

gas to ice age 0-41k
“Although the exact causes of the large overestimate remain unknown, our work implies that the suggested lag of CO2 on Antarctic temperature at the start of the last deglaciation has probably been overestimated.”


From 1941:
Resistance is mentioned but had not yet been observed at that time.


DDT is a persistent poison; it does not quickly break down into safe compounds.

Mosquitoes breed rapidly and DDT-resistant strains were developing. Continuing to spray DDT to eradicate the non-resistant mosquitoes would be pointless. Why poison the world to eradicate fewer and fewer mosquitoes?

From 1952:
Resistance to DDT and dieldrin and concern over their environmental impact led to the introduction of other, more expensive insecticides. As the eradication campaign wore on, the responsibility for maintaining it was shifted to endemic countries that were not able to shoulder the financial burden. The campaign collapsed and in many areas, malaria soon returned to pre-campaign levels

An interesting bit:
DDT killed some bugs and not others, leading to bed bugs etc.
In Malaysian villages, the roofs of the houses were a thatch of palm fronds called atap. They were expensive to construct, and usually lasted five years. But within two years of DDT spraying the roofs started to fall down. As it happened, the atap is eaten by caterpillar larvae, which in turn are normally kept in check by parasitic wasps. But the DDT repelled the wasps, leaving the larvae free to devour the atap.
In Greece, in the late nineteen-forties, for example, a malariologist noticed Anopheles sacharovi mosquitoes flying around a room that had been sprayed with DDT. In time, resistance began to emerge in areas where spraying was heaviest. To the malaria warriors, it was a shock. “Why should they have known?” Janet Hemingway, an expert in DDT resistance at the University of Wales in Cardiff, says. “It was the first synthetic insecticide. They just assumed that it would keep on working, and that the insects couldn’t do much about it.”

Human exposure
Analysis of human fat has been carried out occasionally in the UK showing that DDT can persist for many years. Analysis of 203 samples of mostly renal fat showed 99% contained detectable residues of DDT (see table 3)(24). Many of the levels found are above effect-level exposures required to elicit a carcinogenic response in test animals (see mice studies above). They are also well above the life-time safety exposure limit ADI of 0.02 mg/kg body weight.

DDT and its metabolites can lower the reproductive rate of birds by causing eggshell thinning which leads to egg breakage, causing embryo deaths. Sensitivity to DDT varies considerably according to species(35). Predatory birds are the most sensitive. In the US, the bald eagle nearly became extinct because of environmental exposure to DDT. According to research by the World Wildlife Fund and the US EPA, birds in remote locations can be affected by DDT contamination. Albatross in the Midway islands of the mid-Pacific Ocean show classic signs of exposure to organochlorine chemicals, including deformed embryos, eggshell thinning and a 3% reduction in nest productivity. Researchers found levels of DDT in adults, chicks and eggs nearly as high as levels found in bald eagles from the North American Great Lakes(36).

Many insect species have developed resistance to DDT. The first cases of resistant flies were known to scientists as early as 1947, although this was not widely reported at the time(39). In the intervening years, resistance problems increased mostly because of over-use in agriculture. By 1984 a world survey showed that 233 species, mostly insects, were resistant to DDT(40). Today, with cross resistance to several insecticides, it is difficult to obtain accurate figures on the situation regarding the number of pest species resistant to DDT

40 years ago, in 1969, DDT was freely available world wide. Sweden banned the stuff from agricultural use in 1970; the U.S. followed with a ban on agricultural use of DDT, especially sprayed from airplanes. DDT for fighting malaria has always been a feature of the U.S. ban. As a pragmatic matter, DDT manufacture on U.S. shores continued for more than a dozen years after the restrictions on agricultural use of the stuff. In an ominous twist, manufacture in the U.S. continued through most of 1984, right up to the day the Superfund Act made it illegal to dump hazardous substances without having a plan to clean it up or money to pay for clean up — on that day the remaining manufacturing interests declared bankruptcy to avoid paying for the environmental damage they had done. See the Pine River, Michigan Superfund site, or the Palos Verdes and Montrose Chemical Superfund sites in California, the CIBA-Geigy plant in McIntosh, Alabama, and sites in Sand Creek, Colorado, Portland, Oregon, and Aberdeen, North Carolina, for examples.

Toxicity Rating (Oral Acute LD50 for Rats)
Extremely toxic: 1 mg/kg or less (e.g., dioxin, botulinum toxin)
Highly toxic: 1 to 50 mg/kg (e.g., strychnine)
Moderately toxic: 50 to 500 mg/kg (e.g., DDT)
Slightly toxic: 0.5 to 5 g/kg (e.g., morphine)
Practically nontoxic: 5 to 15 g/kg (e.g., ethyl alcohol)

DDT was abandoned not because of greenies but because:
- it was becoming ineffective
- it was killing other beneficial bugs
- the money dried up
- it was being improperly applied

Rehabilitating Carson
John Quiggin
24th May 2008 — Issue 146
Why do some people continue to hold Rachel Carson responsible for millions of malaria deaths?

Rachel Carson launched the modern environmental movement. She was posthumously awarded the US presidential medal of freedom, and has conservation areas, prizes and associations named in her honour.

Yet Carson has also been accused of killing more people than Hitler. Her detractors hold her responsible for a “ban” on the use of the insecticide DDT (Dichlorodiphenyltrichloroethane), which, they claim, halted a campaign that was on the verge of eradicating malaria in the 1960s.

Some mainstream journalists have accepted this story, which in turn has led to pressure on the World Health Organisation (WHO) and other bodies to change policies and personnel. Yet perhaps the most striking feature of the claim against Carson is the ease with which it can be refuted. It takes only a few minutes with Google to discover that DDT has never been banned for anti-malarial uses, and that it is in use in at least 11 countries.

It takes only a little more time to discover that the postwar attempt to eradicate malaria by the spraying of DDT was a failure, largely because Carson’s warnings that overuse of insecticides would lead to the development of resistance in mosquito populations were ignored. Modern uses of insecticides are far closer to the methods advocated by Carson than to the practices she criticised.

How, then, did the idea that Carson was responsible for millions of deaths gain currency? Any good myth requires a few grains of truth, and the DDT malaria story has a couple. First, the 2001 Stockholm convention on persistent organic pollutants prohibits the use of DDT except for disease control, and calls for all DDT use to be phased out. The phase-out commitment is often loosely referred to as a “ban.”

Second, by virtue of its massive misuse in the 1960s and 1970s, DDT gained a bad reputation that was hard to shake. As a result, says WHO’s Allan Schapira, donors have sometimes insisted on the use of an insecticide other than DDT, even in “countries where the government wished to use DDT, and there was evidence that it was the best option for malaria-vector control.”

But these grains of truth are scarcely enough to generate a myth as widespread as that of “Rachel Carson, baby killer.” So what accounts for the campaign against Carson? The story begins in the 1940s. Swiss chemist Paul Hermann Müller won the 1948 Nobel prize for medicine for his discovery of the efficacy of DDT against several arthropods. It was used in the war by Allied forces with striking success to protect troops and civilians from the insects that transmit malaria, typhus and other diseases.

After the war, the use of DDT continued apace. In 1955, the WHO adopted a global malaria eradication campaign, based on spraying DDT on to the interior walls of houses to protect residents against malaria-carrying mosquitoes. The insecticide was well suited for this task because it was “persistent,” meaning that a wall sprayed with DDT would kill mosquitoes that rested on it for six months after spraying.

The programme was never extended to sub-Saharan Africa, where malaria was most acute. But the failure of the programme was not the result of underuse: quite the opposite. In the first flush of enthusiasm for DDT in the 1950s, the range of applications was rapidly extended from disease control to agricultural and other uses. DDT was widely used in both developed and less developed countries to protect crops against pests. Indiscriminate use in agriculture led to the evolution of DDT-resistant mosquitoes.

Rachel Carson, a prominent American science writer, had long been concerned about the impact of DDT and other pesticides on the environment. In 1962, her ideas were crystallised in the bestseller Silent Spring, which made the case that overuse of pesticides was a threat to wildlife, human health and even their own usefulness against malaria. Within a year, the US president’s science advisory committee called for a reduction in the use of persistent pesticides. In 1972, the use of DDT in US agriculture was banned, though an exception (which has never been used) was made for emergency public health applications.

Meanwhile, the DDT-based eradication campaign against malaria ran into the trouble Carson had warned about. The high-water mark of the campaign came in 1964. Sri Lanka had reduced its number of malaria cases from millions after the end of the war to just 29. The country declared victory over malaria and suspended spraying. WHO called the eradication programme “an international achievement without parallel in the provision of public health service.”

But then malaria returned to Sri Lanka. In 1968-69, there were half a million cases. The country went back to spraying DDT, but because it had been extensively used in agriculture, mosquitoes had evolved resistance. The insecticide became less and less effective, eventually forcing Sri Lanka to switch to an alternative, malathion, in the mid-1970s. Other countries in the eradication program suffered similar setbacks, and by 1969, the 22nd World Health Assembly concluded that the goal of global eradication of malaria was not feasible.

By 1990, it seemed that the public health issues surrounding DDT had been largely resolved. In developed countries, DDT had been replaced by less environmentally damaging alternatives. But soon the situation changed radically. The tobacco industry, faced with the prospect of bans on smoking in public places, sought to cast doubt on the science behind the mooted ban. But a campaign focused on tobacco alone was doomed to failure. So the industry tried a different tack, an across-the-board attack on what it called “junk science.” Its primary vehicle was the Advancement of Sound Science Coalition (TASSC), a body set up by PR firm APCO in the early 1990s and secretly funded by Philip Morris.

TASSC, led by an activist named Steve Milloy, attacked the environmental movement on everything from food safety to the risks of asbestos. One of the issues Milloy took up with vigour was DDT, where he teamed up with the entomologist J Gordon Edwards. With the aid of Milloy’s advocacy, Edwards’s attacks on Rachel Carson moved from the political fringes to become part of the orthodoxy of mainstream US Republicanism.

Tobacco companies created a European version of TASSC, the European Science and Environment Forum (ESEF), led by Roger Bate, another tobacco lobbyist. In the late 1990s, Bate established “Africa Fighting Malaria,” a so-called “astroturf” organisation based in Washington DC. His aim was to drive a wedge between public health and the environment by suggesting that by banning DDT to protect birds, environmentalists were causing many people to die from malaria. Between them, Milloy’s TASSC and Bate’s Africa Fighting Malaria convinced many that DDT was a panacea for malaria, denied to the third world by the machinations of rich environmentalists.

For both groups, the big opportunity came with the 2001 Stockholm convention. The treaty banned most uses of organochlorine pesticides such as DDT and dieldrin because of their persistence and toxicity. It was agreed that the use of DDT for malaria control should be exempted from the general ban until affordable substitutes could be found. The only point of dispute was whether an explicit target date for the phase-out of DDT should be set.

The pro-DDT brigade pounced on the phase-out proposal, describing it as a “ban,” and news stories made it appear that the ban was imminent. Yet during the negotiations, the World Wide Fund for Nature, the main supporters of a targeted phase-out date, abandoned its proposal, focusing instead on more stringent attempts to control the illegal use of DDT in agriculture. The outcome was an eminently sensible one.

But the debate had given Milloy, Bate and others the start they needed. Successfully conflating the use of the term “ban” to describe the eventual phase-out proposal with the 1972 ban on agricultural use in the US, Milloy produced his “malaria clock,” which blamed the “ban” for all malaria deaths since 1972. Meanwhile, Bate and Africa Fighting Malaria continued to claim that the widespread use of DDT was being prevented solely because of the opposition of western environmentalists.

The high point of the pro-DDT campaign came in late 2006, with the appointment of Arata Kochi as head of the WHO malaria program. Kochi saw the need to placate the critics associated with the Bush administration, and issued an announcement describing a renewed commitment to DDT. The “new” position was little more than a restatement of long-standing policy. Nevertheless, it appeased critics and mobilised support for additional funds, particularly from the US government.

Kochi’s announcement was hailed as a triumph by the promoters of the DDT myth. But the environmentalists and scientists started fighting back. The idea of DDT spraying as a panacea for malaria threatened, they said, to derail the tentative progress that was being made with campaigns incorporating improved treatments, insecticide-treated bed nets and a range of public health measures.

The response of the pro-DDT campaigners was revealing. Milloy, who has long shown himself to be utterly shameless, maintained his malaria clock, adding footnotes to indicate that he knew his claims to be false. By contrast, Bate began to adjust his position, noting in a recent interview that, “I think my position has mellowed, perhaps with age.”

Sanity now appears to be returning to the malaria debate. At meetings on the implementation of the Stockholm convention, WHO put out another restatement of its position, this time stressing the commitment to an eventual phase-out of DDT, while noting that its use would continue until adequate substitutes were found.

In 2007, the WHO concluded that long-lasting insecticide treated bed nets were more cost-effective than DDT spraying in high malaria transmission areas. Earlier this year, it announced dramatic progress against malaria in Rwanda and Ethiopia based on a strategy of long-lasting insecticidal nets and artemisinin-combination therapy drugs.

Following these successes, the goal of global eradication of malaria has been revived. The hope is that a combination of existing measures, like bed nets, insecticides (including DDT among others) and drugs, can drive down the number of cases and shrink malaria’s range across Africa and Asia. The new strategy is based on a judicious mix of tactics against malaria, rather than a knockout blow based on a single weapon. Rachel Carson would surely have approved.