I arrived at the MPI-IS in Tübingen to spend two days talking about image modeling with Schölkopf, Harmeling, and Kuhlmann. A lot of what we are talking about is the possibility of saving Kepler, where our big idea is that we can recover lost precision (from loss of pointing accuracy) by modeling the images, but we also talked about radio interferometry. On the Kepler front, we discussed the past Kepler data, the precision requirements, and the problems we will have in modeling the images. One serious problem for us is that because Kepler got its precision in part by always putting the stars at exactly the same places on the CCD in every exposure, we don't get the kind of data we want for self-calibration of the detector and the PSF. That's bad. Of course, the precision of the whole system was thereby made very good. In two-wheel mode (the future), the inevitably larger drift of the stars relative to the CCD pixels will be a curse (because the system won't be perfectly stable and stationary) but also a blessing (because we will get the independent information we need to infer the calibration quantities).
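The curse-and-blessing point can be made concrete with a toy sketch (entirely invented here, not anything we implemented): if a star always lands on the same pixel, its flux and that pixel's sensitivity are degenerate; once stars drift across known pixels, both can be solved for jointly (up to one overall scale). The numbers, dimensions, and noise-free setup below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stars, n_pix, n_epochs = 5, 8, 20
true_flux = rng.uniform(1.0, 10.0, n_stars)   # hypothetical star fluxes
true_gain = rng.uniform(0.8, 1.2, n_pix)      # hypothetical pixel sensitivities

# Two-wheel-like case: each epoch, each star lands on a (known) drifting pixel.
pix = rng.integers(0, n_pix, (n_epochs, n_stars))
obs = true_flux[None, :] * true_gain[pix]     # noise-free for clarity

# Solve log(obs) = log(flux_i) + log(gain_p) by linear least squares,
# pinning the mean log-gain to zero to fix the overall-scale degeneracy.
rows = n_epochs * n_stars
A = np.zeros((rows + 1, n_stars + n_pix))
b = np.zeros(rows + 1)
r = 0
for t in range(n_epochs):
    for i in range(n_stars):
        A[r, i] = 1.0                   # coefficient for log(flux_i)
        A[r, n_stars + pix[t, i]] = 1.0  # coefficient for log(gain_p)
        b[r] = np.log(obs[t, i])
        r += 1
A[rows, n_stars:] = 1.0                 # constraint: sum of log-gains = 0
x, *_ = np.linalg.lstsq(A, b, rcond=None)
fit_gain = np.exp(x[n_stars:])
# fit_gain recovers true_gain up to its geometric mean (the fixed scale);
# had every star stayed on one pixel, A would be rank-deficient instead.
```

In the fixed-pointing case (all entries of `pix` identical per star), the design matrix loses rank and the gains are unrecoverable, which is exactly the self-calibration problem described above.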
On the radio-interferometry front, we discussed priors for image modeling, and also the needs of any possible "customers" for a new radio-interferometry image-construction method. We decided that among the biggest needs are uncertainty propagation and quantification of significance. These needs would be met by propagating noise, either by sampling or by developing approximate covariance-matrix representations. In addition, we need to give investigators ways to explore the sensitivities of results to priors. We came up with some first steps for Kuhlmann.
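The two routes to noise propagation mentioned above can be compared in a minimal linear-imaging sketch (a generic ridge-regularized toy, not any actual interferometry pipeline; the measurement matrix, noise level, and prior strength are all invented): the analytic covariance of the estimator versus an empirical spread from re-solving with fresh noise realizations.

```python
import numpy as np

rng = np.random.default_rng(1)
n_data, n_img, sigma = 40, 10, 0.1

H = rng.normal(size=(n_data, n_img))    # hypothetical linear measurement operator
x_true = rng.normal(size=n_img)         # hypothetical true image
y = H @ x_true + sigma * rng.normal(size=n_data)

lam = 1.0                               # Gaussian-prior (ridge) strength
A = H.T @ H + lam * np.eye(n_img)
x_hat = np.linalg.solve(A, H.T @ y)     # regularized reconstruction

# (a) Approximate covariance-matrix representation of the estimator:
#     cov(x_hat) = sigma^2 * A^-1 H^T H A^-1
cov = sigma**2 * np.linalg.solve(A, H.T @ H) @ np.linalg.inv(A)

# (b) Sampling: re-solve under many fresh noise realizations and
#     take the empirical per-pixel scatter.
samples = np.array([
    np.linalg.solve(A, H.T @ (H @ x_true + sigma * rng.normal(size=n_data)))
    for _ in range(2000)
])
analytic_std = np.sqrt(np.diag(cov))
sampled_std = samples.std(axis=0)       # should roughly match analytic_std
```

Route (a) is cheap but only exact for linear, Gaussian problems; route (b) generalizes to nonlinear reconstructions and nontrivial priors at the cost of many re-solves, which is the trade-off behind the "sampling or approximate covariance matrices" choice.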
In the afternoon, I spoke about data analysis in astrophysics to a group of high-school students interested in machine learning.