As my loyal reader knows, Fergus and I have been working on data from Oppenheimer's (AMNH) 1640 coronagraph. Fergus's model is a data-driven, empirical model of the speckle pattern as a function of wavelength, informed by, but not fully determined by, the physical expectation that the pattern should grow (in an angular sense) with wavelength. Fergus's model is very simple but at the same time competitive with the official data pipeline. Nonetheless, we had to make a lot of decisions about what we can and can't hold fixed, and what we can and can't assume about the observations. We resolved many of these issues today in a long meeting with Oppenheimer and Douglas Brenner (AMNH).
Complications we discussed include the following: Sometimes in the observing program, guiding fails and the star slips off the coronagraph stop and into the field; that definitely violates the assumptions of our model. The spectrograph operates at Cassegrain on the Palomar 200-inch, so as the telescope tracks, the gravitational load on the instrument changes continuously; that means we can't treat the optics as rigid (at the wavelength level) over time. When the stars are observed at significant airmass, differential chromatic refraction means that the star cannot be centered on the coronagraph stop simultaneously at all wavelengths. And the planets or companions to which we are sensitive are not primarily reflecting light from the host star; they are young planets and brown dwarfs emitting their own thermal radiation, which has implications for our generative model.
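Just to get a sense of scale for the refraction term, here is a toy calculation in Python. The dispersion model and the numbers are rough placeholders (nothing from the 1640 pipeline); the only point is that the atmospheric shift depends on wavelength, so a star centered on the stop at one wavelength sits off-center at another.

```python
import numpy as np

def air_index_minus_one(lam_um):
    # Crude, purely illustrative dry-air dispersion (not a precision formula):
    # (n - 1) is a few parts in 1e4 and falls off slowly toward the red.
    return 2.9e-4 * (1.0 + 0.006 / lam_um**2)

def dcr_offset_arcsec(lam_um, lam_ref_um, zenith_deg):
    # Plane-parallel approximation: refraction is about (n - 1) * tan(z), so
    # the differential shift between two wavelengths is the difference of their
    # (n - 1) values times tan(z), converted here from radians to arcsec.
    dn = air_index_minus_one(lam_um) - air_index_minus_one(lam_ref_um)
    return np.degrees(dn * np.tan(np.deg2rad(zenith_deg))) * 3600.0

# Shift relative to the middle of the band, at zenith distance 45 degrees:
for lam_um in (1.0, 1.2, 1.4, 1.6, 1.8):
    print(f"{lam_um:.1f} um: {dcr_offset_arcsec(lam_um, 1.4, 45.0):+.3f} arcsec")
```

Even this cartoon gives wavelength-dependent shifts of order a tenth of an arcsecond at moderate airmass, which is the kind of chromatic centering error the model has to cope with.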
One more general issue we discussed is the obvious point, made repeatedly in computer vision but rarely in astronomy, that astronomical imaging (and spectroscopy too, actually) is a bilinear problem: there is an intensity field created by superposing many sources, and an instrumental point-spread function created by superposing PSF basis functions. The received image is the convolution of these two unknown functions; since convolution is linear in each of its arguments, the basic model is bilinear, a product of two linear objects. The crazy thing is that any natural model of the data will have far more parameters than pixels, because the PSF and the scene are both (possibly) arbitrary functions of space and time. Astronomers deal with this by artificially reducing the number of free parameters (by, for example, restricting the number of basis functions or the freedom of the PSF to vary), but computer-vision types like Fergus (and, famously, my late colleague Sam Roweis) aren't afraid of this situation. There is no problem in principle with having more parameters than data!
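To make the bilinearity concrete, here is a minimal numerical sketch; it is an illustration of the general point, not Fergus's model. The scene is a linear combination of source components with amplitudes `a`, the PSF is a linear combination of basis functions with coefficients `b`, and every pixel of the convolved image is a sum of products a_i b_j, so the model is linear in either set of coefficients with the other held fixed.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
npix, nsrc, nbasis = 64, 3, 4

# Scene: a superposition of (here, delta-function) sources with amplitudes a.
scene_basis = np.zeros((nsrc, npix, npix))
for i, (x, y) in enumerate([(20, 20), (40, 30), (32, 50)]):
    scene_basis[i, y, x] = 1.0
a = rng.uniform(0.5, 2.0, size=nsrc)

# PSF: a superposition of basis functions (here, Gaussians) with coefficients b.
yy, xx = np.mgrid[-8:9, -8:9]
psf_basis = np.array([np.exp(-(xx**2 + yy**2) / (2.0 * s**2))
                      for s in (1.0, 2.0, 3.0, 4.0)])
b = rng.uniform(0.0, 1.0, size=nbasis)

scene = np.tensordot(a, scene_basis, axes=1)  # linear in a
psf = np.tensordot(b, psf_basis, axes=1)      # linear in b

# The image is the convolution of the two unknown functions.
image = fftconvolve(scene, psf, mode="same")

# Because convolution is linear in each argument, the same image is the
# bilinear form sum_ij a_i * b_j * (scene_basis_i convolved with psf_basis_j).
cross = np.array([[fftconvolve(scene_basis[i], psf_basis[j], mode="same")
                   for j in range(nbasis)] for i in range(nsrc)])
image_bilinear = np.einsum("i,j,ijyx->yx", a, b, cross)
assert np.allclose(image, image_bilinear)
```

Fix either `a` or `b` and the problem is ordinary linear least squares; it is the product of the two that makes the joint problem bilinear (and non-convex).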
I agree! The PSF interpolation scheme Chang and I recently built has 44,800 free parameters, and typically we have only a hundred or two noisy data points. I'll have to show it to you this week.
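For what it's worth, that many-more-parameters-than-data situation is well posed as soon as the parameters get a prior (equivalently, a regularization). Here is a generic sketch with illustrative numbers, not the actual interpolation scheme: ridge regression with 44,800 parameters and 150 data points, solved in the data-space (dual) form so the large parameter-space matrix never gets built.

```python
import numpy as np

rng = np.random.default_rng(1)
ndata, nparam = 150, 44800              # far more parameters than data points

A = rng.normal(size=(ndata, nparam))    # stand-in linear model (design matrix)
theta_true = rng.normal(size=nparam)
y = A @ theta_true + 0.1 * rng.normal(size=ndata)

# Ridge / Gaussian-prior solution: minimize |y - A theta|^2 + lam * |theta|^2.
# The textbook form (A^T A + lam I)^{-1} A^T y needs a 44800 x 44800 matrix;
# the algebraically identical dual form only needs an ndata x ndata solve.
lam = 1.0
alpha = np.linalg.solve(A @ A.T + lam * np.eye(ndata), y)
theta_hat = A.T @ alpha

resid = y - A @ theta_hat
print(theta_hat.shape)                  # (44800,) parameters from 150 data points
print(np.sqrt(np.mean(resid**2)))       # small residuals; the fit is well defined
```

The prior (the `lam` term), not the parameter count, is what controls the behavior of the fit.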