2012-08-14

Valchava workshop, day three

After another morning hack session where the only real progress was getting spectral classification information for Muandet, Rob Fergus and I talked about astronomy challenges relevant to the computer-vision types. Fergus talked about our project on high dynamic-range imaging, where the problem is to make a data-driven model of an extremely complicated and time-variable point-spread function when we have very few samples. This wouldn't be hard if we didn't care about finding exoplanets that are much, much fainter than the speckley dots in the PSF. The advantage we have with the P1640 data we have been using is that the speckles (roughly) expand away from the optical axis with wavelength, whereas faint companions are fixed in position.
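The wavelength-scaling trick can be illustrated with a toy cube. This is only a sketch of the idea, not the P1640 pipeline: each slice is magnified about the optical axis by lam_ref / lam (here with crude nearest-neighbor sampling, where real code would interpolate properly), which stacks the speckles on top of each other while smearing a fixed companion across the slices. All sizes, wavelengths, and brightnesses below are invented.

```python
import numpy as np

def rescale_slice(img, lam, lam_ref):
    """Magnify one wavelength slice by lam_ref / lam about the image
    center (nearest-neighbor; a toy stand-in for real interpolation).
    A speckle sitting at radius r0 * lam / lam_ref in the raw slice
    lands at radius r0 here, so speckles line up across wavelengths."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    scale = lam / lam_ref  # sample the raw slice at this radial factor
    out = np.zeros_like(img)
    for i in range(n):
        for j in range(n):
            yi = int(round(c + (i - c) * scale))
            xj = int(round(c + (j - c) * scale))
            if 0 <= yi < n and 0 <= xj < n:
                out[i, j] = img[yi, xj]
    return out

# Toy IFU-style cube: one bright speckle whose radius grows with
# wavelength, one faint companion fixed in position.
n, c = 41, 20
lams, lam_ref = [1.0, 1.1, 1.2], 1.0
aligned = []
for lam in lams:
    img = np.zeros((n, n))
    img[c, c + int(round(10 * lam))] = 1.0   # speckle: radius ~ lam
    img[c, c + 15] = 0.1                     # companion: fixed radius
    aligned.append(rescale_slice(img, lam, lam_ref))

# After rescaling, the speckle is pinned at the same pixel in every
# slice, while the companion no longer stays put:
speckle_track = [a[c, c + 10] for a in aligned]
companion_track = [a[c, c + 15] for a in aligned]
```

Anything constant along the rescaled wavelength axis is then plausibly a speckle; anything that moves (equivalently, is constant only in the unscaled frames) is a candidate companion.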

I talked about what I see as three themes in astronomy where astronomers need help: (1) In large Bayesian inferences, we need methods for finding, describing, and marginalizing probability distributions. (2) We have many problems where we need to eventually combine supervised and unsupervised methods, because we have good theories but we can also see that they fail in some circumstances or respects (think: stellar spectral models). (3) The training data we use to learn about some kind of objects (think: quasars) is never the same in signal-to-noise, distance, brightness, redshift, spectral coverage, or angular resolution as the test data on which we want to run our classifications or models next. That is, we can't use purely discriminative models; we need generative models. That's what XDQSO did so well.
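The third point can be made concrete with a one-dimensional toy. This is my reading of the generative idea behind XDQSO, not its actual code: learn each class as a density in "true flux" space, then score a test object against that density convolved with the test object's own measurement noise, so training and test sets need not match in signal-to-noise. All numbers are invented for illustration.

```python
import numpy as np

# Class-conditional Gaussians in noiseless "true flux" space,
# as if learned from high-S/N training data (invented values).
mu = {"star": 0.0, "quasar": 3.0}       # class means
sigma = {"star": 1.0, "quasar": 1.0}    # intrinsic class widths

def log_gauss(x, m, var):
    """Log density of a 1-d Gaussian with mean m and variance var."""
    return -0.5 * ((x - m) ** 2 / var + np.log(2.0 * np.pi * var))

def classify(x_obs, noise_sigma):
    """Pick the class maximizing p(x_obs | class), where the class
    density is broadened by this object's own measurement noise:
    total variance = sigma_class**2 + noise_sigma**2."""
    scores = {k: log_gauss(x_obs, mu[k], sigma[k] ** 2 + noise_sigma ** 2)
              for k in mu}
    return max(scores, key=scores.get)

bright_label = classify(2.8, noise_sigma=2.0)
faint_label = classify(0.2, noise_sigma=2.0)
```

With equal intrinsic class widths the noise term doesn't move the decision boundary in this toy, but when the class widths differ (as they do for real star and quasar loci) the per-object noise convolution changes which class wins, which is exactly what a discriminative model trained at one signal-to-noise cannot do.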

On the last point, Schölkopf corrected me: He thinks that what I need are causal models, which are often—but not always—also generative. I am not sure I agree but the point is interesting.
