2011-07-13

Intelligent Systems, day 3

In the morning we continued our discussions of multi-exposure imaging. I love the style of this computational imaging group: Work hard all day, but work equals sitting in the garden arguing! We particularly discussed what you can believe about a model made by forward-modeling through the PSF (that is, a deconvolution). My position is that because there are near-degeneracies in such modeling, you have to return a posterior probability distribution over deconvolved images (or, more practically, a sampling of that distribution); Fergus thought it might be possible to adapt the model complexity so as to maintain unimodality in the posterior PDF. Either way, representing the posterior PDF is not going to be trivial! We postponed all such issues to subsequent projects; we have scoped a first paper that skirts them.
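
For concreteness, here is a minimal sketch (in Python, with an invented point-source scene, Gaussian PSF, and noise level; none of this is our actual code) of what forward-modeling through the PSF means operationally: the deconvolved scene is the model, and the likelihood compares its PSF-convolved version to the data. Sampling the posterior over the scene under this likelihood (times a prior), rather than just optimizing it, is what I mean by returning a distribution over deconvolved images.

```python
import numpy as np
from scipy.signal import fftconvolve

def ln_likelihood(scene, data, psf, sigma):
    """Gaussian ln-likelihood of the data given a deconvolved scene.

    Forward model: predicted = scene convolved with the PSF; the data carry
    independent Gaussian noise of rms sigma.  "Deconvolution" is then
    inference of `scene` under this likelihood (plus whatever prior you
    choose); scenes that convolve to nearly the same prediction are the
    near-degeneracies that make the posterior hard to summarize.
    """
    predicted = fftconvolve(scene, psf, mode="same")
    chi = (data - predicted) / sigma
    return -0.5 * np.sum(chi ** 2)

# toy data: one point source blurred by a Gaussian PSF (all numbers invented)
rng = np.random.default_rng(17)
scene_true = np.zeros((32, 32))
scene_true[16, 16] = 100.0
x = np.arange(-4, 5)
psf = np.exp(-0.5 * (x[:, None] ** 2 + x[None, :] ** 2) / 1.5 ** 2)
psf /= psf.sum()  # PSF normalized to unit sum
sigma = 1.0
data = fftconvolve(scene_true, psf, mode="same") + sigma * rng.standard_normal((32, 32))

print(ln_likelihood(scene_true, data, psf, sigma))
```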

In the afternoon, Christopher Burger, Stefan Harmeling, and I discussed making probabilistic models of CCD bias, dark-current, flat, and read-noise frames, from a combination of zero, dark, flat, and science data. We decided to make some experiments with a laboratory CCD camera and, if they work, repeat them with archival HST data.
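
As a straw man, here is a minimal sketch (in Python, with invented per-pixel numbers and a deliberately simplified noise model; the real experiments will use the lab camera and then archival HST frames) of the kind of per-pixel generative model we have in mind: zero frames constrain the bias and read noise, and dark frames additionally constrain the dark-current rate. The flat-field and science-frame terms (per-pixel sensitivity times illumination, plus sky and sources) are omitted here, and the stack-averaging estimators at the end would be replaced by proper inference under the generative model.

```python
import numpy as np

rng = np.random.default_rng(0)
ny, nx = 64, 64

# per-pixel "truth" for a toy detector (all numbers invented)
bias_true = 1000.0 + 5.0 * rng.standard_normal((ny, nx))               # ADU
dark_rate_true = np.abs(0.10 + 0.02 * rng.standard_normal((ny, nx)))   # ADU / s
read_noise_true = 3.0                                                  # ADU rms

def zero_frame():
    """Zero (bias) frame: bias structure plus read noise."""
    return bias_true + read_noise_true * rng.standard_normal((ny, nx))

def dark_frame(t_exp):
    """Dark frame: bias, Poisson-distributed dark counts, and read noise."""
    dark_counts = rng.poisson(dark_rate_true * t_exp)
    return bias_true + dark_counts + read_noise_true * rng.standard_normal((ny, nx))

# straw-man estimators from stacks of calibration frames; the probabilistic
# version would instead write down this generative model and infer the
# per-pixel parameters (with priors) from zeros, darks, flats, and science
# frames jointly.
zeros = np.stack([zero_frame() for _ in range(32)])
darks = np.stack([dark_frame(300.0) for _ in range(32)])

bias_hat = zeros.mean(axis=0)
read_noise_hat = zeros.std(axis=0).mean()
dark_rate_hat = (darks.mean(axis=0) - bias_hat) / 300.0

print("mean read-noise estimate (ADU):", np.round(read_noise_hat, 2))
print("median dark-rate estimate (ADU/s):", np.round(np.median(dark_rate_hat), 3))
```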

3 comments:

  1. You may also want to consider the Keck Observatory archive (I forget if I've suggested this before). That way, you'll have access to about 15 years' worth of data across three different detectors, which will have different characteristics from the HST ones.

  2. Deconvolve the PSF. Never fit a PSF-convolved model.

  3. Deconvolving the PSF requires fitting a PSF-convolved model anyway; that is, it is the process of finding models that, after convolution, agree with the data.
