Some Interesting Figures (II)

Continuation of Some Interesting Figures; mostly these posts serve as pointers to myself, but some may find them useful. Pictures you won’t see at RC.

  1. Mann et al 2008

Continue reading ‘Some Interesting Figures (II)’ »

Predicting Temperatures

On August 27th, 2008, before the August HadCRUT NH+SH temperature was available, I posted this prediction (http://signals.auditblogs.com/files/2008/08/gmt_pred.txt) at CA:

After four months, it is good to check how well the simple half-integrated-white-noise model is doing. The predictions for these four months were:

Year    Month    -2 sigma    Predict.    +2 sigma
2008        8    0.15206     0.34864     0.54521
2008        9    0.1141      0.33388     0.55365
2008       10    0.094005    0.32581     0.55762
2008       11    0.080432    0.32024     0.56005

and the observations today, 15 Dec 2008, on the HadCRU website are the following:

2008/08 0.396
2008/09 0.374
2008/10 0.438
2008/11 0.387

Here’s how these fit into the original figure:

[Figure: gmt_pred_upd.png — observations added to the original prediction figure]

The model is doing quite a good job. I’ll tell you when we reach the upper bound of the prediction interval. And once temperatures go permanently above that bound, AGW kicks this model into the trash can.
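For anyone who wants to repeat the check, here is a minimal Matlab sketch. The numbers are simply copied from the table and the HadCRU values above; the plotting details are not taken from the original gmt_pred figure.

% Check the Aug-Nov 2008 HadCRU values against the prediction intervals above
months = [8 9 10 11];
lo   = [0.15206 0.1141  0.094005 0.080432];   % -2 sigma
pred = [0.34864 0.33388 0.32581  0.32024];    % point predictions
hi   = [0.54521 0.55365 0.55762  0.56005];    % +2 sigma
obs  = [0.396   0.374   0.438    0.387];      % HadCRU NH+SH, as of 15 Dec 08

inside = (obs > lo) & (obs < hi);
fprintf('%d of %d observations inside the 2-sigma band\n', sum(inside), numel(obs));

figure; hold on
plot(months, pred, 'k-', months, lo, 'k--', months, hi, 'k--')
plot(months, obs, 'ro')
xlabel('Month of 2008'); ylabel('Temperature anomaly')
legend('prediction', '-2\sigma', '+2\sigma', 'observed')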

Moore et al. 2005

I originally planned to write a long post about signals and noise, but I guess it is better to focus on tiny details and write a book later. The article New Tools for Analyzing Time Series Relationships and Trends by J. C. Moore, A. Grinsted, and S. Jevrejeva (Eos, Vol. 86, No. 24, 14 June 2005) got some attention on David Stockwell’s blog, http://landshape.org/enm/rahmstorf-et-al-2007-ipcc-error/ . A very interesting article, but I’m afraid there’s something wrong with statements such as

A wise choice of embedding dimension can be made with a priori insight or perhaps more commonly may be found by simply playing with the data.
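If I read the embedding-dimension remark as referring to SSA-type trend extraction, here is a rough Matlab sketch of what “playing with the data” looks like in practice. This is a generic textbook SSA, not the Moore et al. (2005) code, and the toy series is made up; the point is only that the extracted leading component changes with the embedding dimension M.

% demo_embedding.m -- the extracted "trend" depends on the embedding dimension M
function demo_embedding
  n = 200;
  x = 0.01*(1:n)' + 0.5*randn(n,1);        % synthetic series: linear trend + white noise
  figure; hold on
  plot(x, 'Color', [0.7 0.7 0.7])
  for M = [5 15 45]                        % three embedding dimensions to "play with"
    plot(ssa_trend(x, M), 'LineWidth', 1.5)
  end
  legend('data', 'M = 5', 'M = 15', 'M = 45')
end

function t = ssa_trend(x, M)
% Leading SSA component of column vector x, window (embedding) length M
  n = length(x);
  K = n - M + 1;
  X = zeros(M, K);                         % trajectory matrix
  for i = 1:K
    X(:, i) = x(i:i+M-1);
  end
  [U, S, V] = svd(X, 'econ');
  X1 = S(1,1) * U(:,1) * V(:,1)';          % rank-one approximation
  t = zeros(n, 1); c = zeros(n, 1);
  for i = 1:M                              % diagonal averaging back to a series
    for j = 1:K
      t(i+j-1) = t(i+j-1) + X1(i, j);
      c(i+j-1) = c(i+j-1) + 1;
    end
  end
  t = t ./ c;
end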

In particular, Figure 3 of that article caught my eye:

Continue reading ‘Moore et al. 2005’ »

Hockeystick for Matlab

Here’s version 1.1: hockeystick1.txt

UPD Jan 2010: change

urlwrite('http://www.climateaudit.org/data/mbh99/proxy.txt','proxy.txt')

to

urlwrite('http://www.climateaudit.info/data/mbh99/proxy.txt','proxy.txt')

Some notes:

  • Download to an empty folder and rename to hockeystick.m
  • The program downloads the necessary data from the web (once) using urlwrite.m (a newish Matlab is needed); see the sketch after this list
  • It’s a script
  • Shows what PC1_fixed does
  • Only one file is downloaded from CA (AD1000 proxies); sorry RC, but I don’t know where to find morc014 elsewhere..
  • Please tell me if it works or not: uc_edit at yahoo.com!
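The download-once idea is roughly the following; this is only a sketch, not the actual hockeystick.m code:

% Fetch a data file only if it is not already in the working folder
fname = 'proxy.txt';
if ~exist(fname, 'file')
    urlwrite('http://www.climateaudit.info/data/mbh99/proxy.txt', fname);
end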

Updated to Ver 1.1, added cooling trends:

[Figure: fig1.1.png — v1.1 output with cooling trends added]

Some Interesting Figures

While discussing at CA, I’ve made some figures that are spread around CA posts. Here’s a collection of the interesting ones, along with a link to the CA post in question. All of them seem to be related to Dr. Mann’s work. I wonder why..

  1. Re-scaling the Mann and Jones 2003 PC1

Continue reading ‘Some Interesting Figures’ »

Multivariate Calibration (II)

In the previous post, I mentioned that the Juckes et al INVR is essentially CCE. In addition, it was noted that CCE is not an ML estimator and that Brown82 shows how to really compute the confidence region in multivariate calibration problems. As Dr. Juckes did a good job of archiving his results, we can now compare his CCE (S=I) results with the ML estimator and Brown’s confidence region (with the central point as the point estimate).

Continue reading ‘Multivariate Calibration (II)’ »

Multivariate Calibration

In a calibration problem we have accurately known data values (X) and responses to those values (Y). The responses are scaled and contaminated by noise (E), but are easier to obtain. Given the calibration data (X, Y), we want to estimate new data values (X’) when we observe a response Y’. Using Brown’s (Brown 1982) notation, we have the model

Y = \textbf{1}\alpha^T + XB + E \qquad (1)

Y' = \alpha^T + X'^T B + E' \qquad (2)

where the sizes of the matrices are Y (n×q), E (n×q), B (p×q), Y’ (1×q), E’ (1×q), X (n×p) and X’ (p×1). \textbf{1} is a column vector of ones (n×1). This is a bit less general than Brown’s model (only one response vector for each X’). n is the length of the calibration data, q the length of the response vector, and p the length of the unknown X’. For example, if Y contains proxy responses to global temperature X, then p is one and q is the number of proxy records.

In the following, it is assumed that the columns of E are zero-mean, normally distributed vectors. Furthermore, the rows of E are uncorrelated. (This assumption would be contradicted by red proxy noise.) The (q×q) covariance matrix of the noise is denoted by G. In addition, the columns of X are centered and have an average sum of squares of one.
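With \hat{\alpha}, \hat{B} and \hat{G} denoting estimates obtained from the calibration data, the CCE point estimate discussed above can be sketched as the generalized least squares solution of (2) for X’, treating \alpha, B and G as known at their estimated values:

\hat{X}'^T = (Y' - \hat{\alpha}^T)\,\hat{G}^{-1}\hat{B}^T \left( \hat{B}\,\hat{G}^{-1}\hat{B}^T \right)^{-1}

With \hat{G} replaced by the identity (the S = I case mentioned above), this reduces to an ordinary least squares fit.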

Continue reading ‘Multivariate Calibration’ »

UC’s Millennium Problems

  1. How are those MBH99 uncertainties estimated?
  2. How many meteorological stations would be needed to beat the uncertainty levels of MBH99?
  3. If you don’t have a prior distribution of the signal, and observe signal+noise (noise independent of the signal), what kind of estimator yields a reconstruction that has a smaller sample variance than the true signal?
  4. How do we define / measure natural variability?
  5. Where do we need evolving multivariate regression?
  6. Calibration: ICE, CCE, or maybe even CVM. Why does Kendall’s ATS claim that once the model is clearly stated, the choice of estimator follows directly?