marginalizing a model of astronomers

Lang and I worked more on the Comet Holmes project today. We hit an interesting issue when we tried to marginalize out the time at which an image was taken: if you think an image shows a comet (and we do), and you know neither the comet's orbit nor the time at which the image was taken (and we don't, by construction), then you are inclined to infer a slowly moving comet! This happens because the only sensible likelihood (the probability of the image given the comet parameters) involves a marginalization over times, and the slower the comet moves, the larger the range of times consistent with each image. But a slow comet is a distant comet, and hence a less observable (and less likely to be observed) comet, so we are doing something wrong; it is not trivial to find a principled solution. Bayesians out there? This is a general issue for all parametric curve fitting, is it not?
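The pathology shows up already in a toy one-dimensional setup (my own construction for illustration, not our actual image model): the comet sits at position x(t) = v t, an image records its position with Gaussian noise, and the time prior is flat on [0, T]. Marginalizing over time then gives a likelihood that scales roughly as 1/v, so slower comets always win.

```python
import numpy as np

sigma, T = 0.1, 100.0        # image noise and width of the flat time prior
x_obs = 5.0                  # observed comet position in the image
t_grid = np.linspace(0.0, T, 200001)
dt = t_grid[1] - t_grid[0]

def marginal_likelihood(v):
    # p(x_obs | v) = (1/T) * integral over t of N(x_obs; v*t, sigma^2):
    # the slower the comet, the wider the range of t consistent with x_obs.
    like = np.exp(-0.5 * ((x_obs - v * t_grid) / sigma) ** 2) \
           / (sigma * np.sqrt(2.0 * np.pi))
    return like.sum() * dt / T

for v in (0.1, 1.0, 10.0):
    print(v, marginal_likelihood(v))   # roughly 1/(v*T): slower comets win
```

The change of variables u = v t makes the scaling explicit: whenever the Gaussian's mass lies inside [0, T], the integral evaluates to 1/(v T), independent of everything else, which is exactly the unwanted preference for slow comets.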


  1. I'll ask the obvious question: what are your priors? In particular, on the times? Presumably your model of astronomers should include the distribution of times at which they image the comet, and your prior on the individual times should be hierarchical.

    Also, what are the priors on the orbital elements and the brightness of the comet? When you say "A slow comet is a distant comet, and that is a less observable (and less likely to be observed) comet" it sounds like you have a prior belief that you are not putting into the model.

  2. Here's one type of model to consider: "a person decides to photograph the comet, so they 1) work out when they'll be able to see it, and 2) point the camera at wherever it is and take the photo". Then it doesn't matter how fast the comet is moving: the photo gets taken, and the comet will probably be near the centre.

    (I can't actually follow what your model is. More detail might help me understand what looks like an interesting issue.)
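The pointing model in the comment above can be sketched numerically (a toy formalization of my own, not the commenter's): if the photographer points at the predicted position v t, the datum is the comet's offset from the image centre, the integrand is flat in time, and the speed drops out of the marginalization entirely.

```python
import numpy as np

sigma, T = 0.1, 100.0        # image noise and width of the flat time prior
offset_obs = 0.05            # comet's observed offset from the image centre
t_grid = np.linspace(0.0, T, 2001)
dt = t_grid[1] - t_grid[0]

def marginal_likelihood(v):
    # The photographer tracks the predicted position v*t exactly, so the
    # modelled offset from centre is (v*t - v*t) = 0 at every time: the
    # integrand is constant in t and the speed v cancels out.
    offset_pred = v * t_grid - v * t_grid
    like = np.exp(-0.5 * ((offset_obs - offset_pred) / sigma) ** 2) \
           / (sigma * np.sqrt(2.0 * np.pi))
    return like.sum() * dt / T

for v in (0.1, 1.0, 10.0):
    print(v, marginal_likelihood(v))   # the same value for every speed
```

In other words, conditioning the pointing on the comet's predicted position removes the 1/v factor that a fixed-pointing model would pick up from the time marginalization.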