Today was Save Kepler Day at Camp Hogg. Through a remarkable set of fortunate events, I had Barclay (Ames), Fergus (NYU), Foreman-Mackey (NYU), Harmeling (MPI-IS), Hirsch (UCL, MPI-IS), Lang (CMU), Montet (Caltech), and Schölkopf (MPI-IS) all working on different questions related to how we might make Kepler more useful in two-wheel mode. We are working towards putting in a white paper to the two-wheel call. The MPI-IS crew got all excited about causal methods, including independent components analysis, autoregressive models, and data-driven discriminative models. By the end of the day, Foreman-Mackey had pretty good evidence that the simplest autoregressive models are not a good idea. The California crew worked on target selection and repurpose questions. Fergus started to fire up some (gasp) Deep Learning. Lang is driving the Tractor, of course, to generate realistic fake data and ask whether what we said yesterday is right: The loss of pointing precision is a curse (because the system is more variable) but also a blessing (because we get more independent information for system inference).
One thing about which I have been wringing my hands for the last few weeks is the possibility that every pixel is different, not just in sensitivity (duh, that's the flat-field) but in shape or intra-pixel sensitivity map. That idea is scary, because it would mean that instead of having one number per pixel in the flat-field, we would need many numbers per pixel. One realization I had today is that there might be a multipole expansion available here: The lowest-order effects might appear as dipole and quadrupole terms; this expansion (if relevant) could make the modeling much, much simpler.
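To make the multipole idea concrete, here is a minimal sketch (function name and coefficient layout are hypothetical, purely for illustration): the monopole term is just the ordinary flat-field value, and the lowest-order intra-pixel variation enters through two dipole and three quadrupole coefficients, so each pixel carries six numbers instead of a full sub-pixel sensitivity map.

```python
def pixel_response(dx, dy, f0, dipole, quad):
    """Sensitivity of one pixel to light landing at sub-pixel offset (dx, dy).

    dx, dy : offsets from the pixel center, in pixels (roughly -0.5 to 0.5)
    f0     : monopole term, i.e. the ordinary flat-field value
    dipole : (x-coefficient, y-coefficient) for the dipole term
    quad   : (xx, xy, yy) quadrupole coefficients
    """
    return (f0
            + dipole[0] * dx + dipole[1] * dy
            + quad[0] * dx * dx + quad[1] * dx * dy + quad[2] * dy * dy)

# At the pixel center, the response reduces to the flat-field value:
print(pixel_response(0.0, 0.0, 1.02, (0.01, -0.02), (0.05, 0.0, 0.05)))  # -> 1.02
```

The point of the sketch is the bookkeeping: six numbers per pixel (1 + 2 + 3) is vastly cheaper than tabulating a dense intra-pixel map, and whether the dipole and quadrupole terms suffice is exactly the question the expansion would have to answer.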
The reason all this matters to Kepler is that—when you are working at insane levels of precision (measured in ppm)—these intra-pixel effects could be the difference between success and failure. Very late in the day I asked Foreman-Mackey to think about these things. Not sure he is willing!