Wavelets. Yes, you thought they were cool in the 1980s. Apparently they are back. Almost every statistician at this workshop has mentioned them to me, and after two days of hard work, I am convinced: If you wavelet transform the space in which you do your inference, you can lay down an independent (diagonal) Gaussian Process but still produce exceedingly nontrivial covariance functions. We are looking at whether we can model the Kepler data this way. The idea is to use the wavelets to make it possible to marginalize out all possible stellar variability consistent with the data.
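Here is a toy numpy sketch of that point (nothing Kepler-specific; the Haar construction and the scale-dependent variances are placeholders made up for illustration): a diagonal covariance in the wavelet basis maps back to a dense, structured covariance in the time domain.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet matrix for n a power of two (toy construction)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                    # smooth / coarser-scale rows
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])   # finest-detail rows
    return np.vstack([top, bottom]) / np.sqrt(2.0)

n = 8
W = haar_matrix(n)
assert np.allclose(W @ W.T, np.eye(n))              # rows are orthonormal

# independent (diagonal) process on the wavelet coefficients; the variances
# are a made-up 1/f-ish choice with more power at coarse scales
levels = np.array([0, 1, 2, 2, 3, 3, 3, 3])         # scale label of each Haar row for n = 8
d = 2.0 ** (-levels.astype(float))
K = W.T @ (d[:, None] * W)                          # implied time-domain covariance, K = W^T D W
print(np.round(K, 2))                               # dense and structured, not diagonal
```

Any positive choice of the diagonal gives a valid (positive-definite) covariance, and the heavy operations (solves, determinants) stay trivial in the wavelet basis, which is the whole appeal.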
Baines (Davis) has been instrumental in my understanding, after Carter (CfA) and Dawson (CfA) taught me the basics by battling it out over implementations of the wavelet transform. I am still working on the linear algebra of all this, but I think one issue is going to be that, in order for the method to be fast and scalable, the wavelet part of the problem has to treat the data as uniformly sampled and homoskedastic. Those assumptions are always wrong (do I repeat myself?), but I think I decided today that we can just pretend they are true for the purposes of our rotation into the wavelet basis. The rest of the working group kicked ass while I had these deep, deep thoughts.
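To make the pretend-it-is-uniform-and-homoskedastic point concrete, here is a rough sketch of the resulting marginal likelihood, assuming a power-of-two number of uniformly spaced samples, white noise with variance s2, and placeholder prior variances d for the stellar signal; PyWavelets supplies the fast rotation. This is a sketch of the idea, not our actual Kepler code.

```python
import numpy as np
import pywt

def wavelet_loglike(y, d, s2, wavelet="haar"):
    """Marginal Gaussian log-likelihood: diagonal wavelet-domain prior with
    variances d for the signal, plus white noise of variance s2."""
    coeffs = pywt.wavedec(y, wavelet, mode="periodization")  # fast O(N) rotation
    w = np.concatenate(coeffs)          # N coefficients for power-of-two N
    var = d + s2                        # everything stays independent coefficient by coefficient
    return -0.5 * np.sum(w**2 / var + np.log(2.0 * np.pi * var))

rng = np.random.default_rng(7)
y = rng.normal(size=1024)               # fake, uniformly sampled light curve
d = np.ones_like(y)                     # placeholder per-coefficient prior variances
print(wavelet_loglike(y, d, s2=0.01))
```

Because the orthonormal transform keeps white noise white, the stellar variability marginalizes out coefficient by coefficient, and the whole likelihood evaluation is linear in the number of data points.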
Hey Hogg - If you want to relax the equally spaced assumption, check out Nason (2002) Stat Comp for a fast solution that is still scalable, or you can go to imputation.
If you are willing to spend more time on computation, you can use a continuous wavelet transform - see
Chu, Clyde, Liang (2009) Statistica Sinica
http://www3.stat.sinica.edu.tw/statistica/j19n4/j19n44/j19n44.html
Clyde and George (2000) JRSS B explore what happens with non-Gaussian errors in the time domain and discrete wavelet transforms (OK not quite homoskedastic, but some relaxation of assumptions :-)
(still fast and scalable!)
cheers,
Clyde (Duke :-)