John Kennedy
Playing with data from the OHC paper. I get their gradient for the regression (1.16) by weighting the data by 1/unc^2. This forces the gradient to match the early data and overshoot the later, less certain, data. Other weightings give lower gradients. 1/2
John Kennedy Nov 3
Replying to @micefearboggis
On the other hand, the standard error of the gradient parameter comes out smaller - 0.05 rather than 0.15 as in the paper - so clearly they did something different. 2/2
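A minimal R sketch of the weighted fit described above, with synthetic numbers standing in for the paper's series (the years, values and uncertainties are illustrative assumptions, not the real data); the "Std. Error" column is the naive standard error of the gradient discussed above.

    set.seed(1)
    year <- 1991:2016                                   # illustrative period
    unc  <- seq(0.5, 1.5, length.out = length(year))    # later points assumed less certain
    y    <- 1.16 * (year - 1991) + rnorm(length(year), sd = unc)

    # lm() with weights minimises sum(w * residual^2); w = 1/unc^2 makes the
    # well-constrained early points dominate the fit.
    fit_w <- lm(y ~ year, weights = 1 / unc^2)
    fit_u <- lm(y ~ year)                               # unweighted, for comparison

    summary(fit_w)$coefficients["year", c("Estimate", "Std. Error")]
    summary(fit_u)$coefficients["year", c("Estimate", "Std. Error")]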
Gavin Schmidt Nov 6
Replying to @micefearboggis
Maybe a correction for the autocorrelation in the residuals (rho=~0.6)?
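One common way of making that kind of correction (an assumed approach for illustration, not a statement of what the paper did) is to inflate the naive standard error using the lag-1 autocorrelation of the residuals:

    rho      <- 0.6                                  # lag-1 autocorrelation quoted above
    se_naive <- 0.05                                 # standard error from the simple weighted fit
    se_adj   <- se_naive * sqrt((1 + rho) / (1 - rho))   # AR(1) effective-sample-size inflation
    se_adj

With rho = 0.6 the inflation factor is exactly 2.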
John Kennedy Nov 6
Replying to @ClimateOfGavin
Maybe. Possibly they took the trends for each of their realisations and used the spread of those as the uncertainty?
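A sketch of that "spread of trends across realisations" idea, as a guess at the procedure rather than a reproduction of it (synthetic series and uncertainties; the paper reportedly uses 10^6 realisations, a smaller number keeps this quick):

    set.seed(2)
    year <- 1991:2016
    unc  <- seq(0.5, 1.5, length.out = length(year))     # illustrative uncertainties
    y    <- 1.16 * (year - 1991)                         # illustrative noise-free series

    trends <- replicate(10000, {
      y_pert <- y + rnorm(length(y), sd = unc)           # one perturbed realisation
      coef(lm(y_pert ~ year))["year"]
    })
    c(trend = mean(trends), spread = sd(trends))         # spread taken as the trend uncertainty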
Dr Doug McNeall Nov 6
Replying to @micefearboggis
How did I miss this?!
John Kennedy Nov 6
Replying to @dougmcneall
Miss what?
Dr Doug McNeall Nov 6
Replying to @micefearboggis
That tweet. It must have been the weekend.
lucia liljegren Nov 6
Replying to @micefearboggis
What's 'unc' short for?
John Kennedy Nov 6
Replying to @lucialiljegren
Ah, sorry. unc = uncertainty
lucia liljegren Nov 6
Replying to @micefearboggis
Thanks. (I guessed that soon after. I clicked over to Judy's blog and saw a section called "uncertainty analysis" and thought.... oh. bet it's that.)
lucia liljegren Nov 6
Replying to @micefearboggis
I don't have the paper. (And don't need to.) But when you say you did that weighting, are the uncertainties used in the weightings estimated kinda sorta like the ones in HadCRUT and so on?
lucia liljegren Nov 6
Replying to @micefearboggis
(Also: why is more recent data more uncertain?)
lucia liljegren Nov 6
Replying to @micefearboggis
I posted a comment at Judy's blog highlighting your tweet. (And mentioned weighting by 1/unc^2 can be reasonable, though of course it should be stated in the paper.) Waiting to see how this pans out.
John Kennedy Nov 6
Replying to @lucialiljegren
In a similar vein, I guess. They generate 1,000,000 different data sets.
John Kennedy Nov 6
Replying to @lucialiljegren
I'm not entirely sure. Some of the errors are expressed as X per year, suggesting that they grow in time. Others are modelled as AR(1) series, which would also increase with time because they start from zero at the first data point.
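A quick illustration of that AR(1) behaviour (the parameters below are assumed for illustration, not taken from the paper): an error series pinned to zero at the first data point spreads out as time goes on.

    set.seed(3)
    phi      <- 0.9                         # assumed AR(1) coefficient
    sd_innov <- 0.1                         # assumed innovation standard deviation
    n        <- 26                          # illustrative number of annual values
    sims <- replicate(5000, {
      e <- numeric(n)                       # error fixed at zero at the first point
      for (t in 2:n) e[t] <- phi * e[t - 1] + rnorm(1, sd = sd_innov)
      e
    })
    apply(sims, 1, sd)                      # spread grows with time before levelling off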
Zeke Hausfather Nov 6
What is abundantly clear is that the description of what they did in the methods is woefully insufficient (and Nature length restrictions are not really an excuse here).
John Kennedy Nov 6
There is a lot of detail in the description of the uncertainty analysis, which is nice, and the extended data has useful info, but there do seem to be some gaps. I've not tried to work it through though. It might be that some steps are "obvious" if you follow the analysis.
Roger Pielke Jr. Nov 6
Agreed. Very unclear as to procedure and physical basis. More fundamentally, if the goal is to compare like with like, then the trends in Fig. 1 should be calculated apples to apples across datasets; otherwise the differences risk being a function of statistical methods rather than the real world.
David Berry Nov 6
1/2 I think I would have been that awkward reviewer with this one: lots missing from the description of what they have done, and they might be missing significance levels (2 sigma for trends, 1 sigma elsewhere) or
David Berry Nov 6
2/2 mixing sig. lvls. In R, using md <- gls( apo ~ year, theData, weights = ~sigma^2, method='ML', correlation = corAR1(form=~1)) I get a trend of 1.158 ± 0.075 (at the 1 sigma level) or ± 0.15 at the 2 sigma level. I struggle with the last few years in the plot / fit.
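For anyone wanting to run David's gls() call, a self-contained version using the nlme package, with a synthetic data frame standing in for theData (the real apo and sigma columns would come from the paper's data, which this thread does not include):

    library(nlme)

    set.seed(4)
    theData <- data.frame(
      year  = 1991:2016,
      apo   = cumsum(rnorm(26, mean = 1.16, sd = 0.5)),  # illustrative series only
      sigma = seq(0.5, 1.5, length.out = 26)             # illustrative uncertainties only
    )

    # weights = ~sigma^2 is shorthand for varFixed(~sigma^2): residual variance
    # proportional to sigma^2, i.e. weights of 1/sigma^2. corAR1 allows for
    # autocorrelated residuals; method = "ML" fits by maximum likelihood.
    md <- gls(apo ~ year, data = theData, weights = ~sigma^2,
              method = "ML", correlation = corAR1(form = ~1))
    summary(md)$tTable["year", c("Value", "Std.Error")]  # trend and its 1-sigma error

Doubling the 1 sigma value David reports gives the 2 sigma interval he quotes (0.075 x 2 = 0.15).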