Among other discussion topics, Bovy and I debated the usefulness of the fact that if you have N pieces of data about an object, each of which says something about, say, whether that object is a quasar, you can multiply together the N independently calculated likelihoods for quasar and for star. This sounds great, but it only works when the N pieces of data tell you independent things. That is, it is only true if the joint probability of all the data, given the class, equals the product of the probabilities of each data item separately. This is almost never the case!
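A toy sketch of the failure mode, with made-up numbers: imagine each class ("quasar", "star") produces two measurements that are strongly correlated within the class. The naive product of per-measurement likelihoods then overstates the evidence relative to the correct joint likelihood.

```python
import numpy as np

# Hypothetical setup (all numbers invented for illustration): two classes,
# each producing two measurements that are strongly correlated within the
# class -- so the measurements are NOT conditionally independent.
mean_q = np.array([1.0, 1.0])    # "quasar" class mean
mean_s = np.array([-1.0, -1.0])  # "star" class mean
rho = 0.95                       # strong within-class correlation
cov = np.array([[1.0, rho], [rho, 1.0]])
cov_inv = np.linalg.inv(cov)
cov_det = np.linalg.det(cov)

def joint_likelihood(x, mean):
    """Correct likelihood: full bivariate Gaussian with the true covariance."""
    d = x - mean
    return np.exp(-0.5 * d @ cov_inv @ d) / (2.0 * np.pi * np.sqrt(cov_det))

def naive_likelihood(x, mean):
    """Naive likelihood: product of the two marginals, as if independent."""
    d = x - mean
    return np.prod(np.exp(-0.5 * d**2) / np.sqrt(2.0 * np.pi))

x = np.array([1.5, 0.5])  # one hypothetical observed object

joint_ratio = joint_likelihood(x, mean_q) / joint_likelihood(x, mean_s)
naive_ratio = naive_likelihood(x, mean_q) / naive_likelihood(x, mean_s)

print(f"joint quasar/star likelihood ratio: {joint_ratio:.2f}")
print(f"naive quasar/star likelihood ratio: {naive_ratio:.2f}")
```

With this correlation the naive ratio is several times larger than the joint ratio: multiplying the N marginal likelihoods effectively counts the same information N times.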
In related news, we figured out why Zolotov and I are having trouble computing the Bayes factor for our project: Bayes factors are hard to compute! They are also very slippery, because a small change to an uninformative prior—one that makes no change whatsoever to the posterior probability distribution—can have a huge impact on the Bayes factor. Once again, I am reminded that model selection is where I part company from the Bayesians. If you don't have proper, informed priors, you can't compute marginalized relative probabilities of qualitatively different models. In most real situations, only cross-validation and data prediction are reliable.
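The slipperiness is easy to demonstrate in a toy comparison (a minimal sketch; the model and numbers are invented, not the Zolotov project): Model A has the data drawn from a unit-variance Gaussian with mean fixed at zero, Model B has a free mean mu with a flat "uninformative" prior on [-W, W]. Widening W leaves the posterior on mu essentially untouched, but the evidence for B carries a 1/(2W) factor, so the Bayes factor drops in direct proportion to the prior width.

```python
import numpy as np

# Fake data: 20 points from a unit-variance Gaussian (mean chosen arbitrarily)
rng = np.random.default_rng(1)
x = rng.normal(0.3, 1.0, size=20)

def model_B(W, n_grid=20001):
    """Bayes factor B/A and posterior mean of mu, for prior half-width W."""
    mu = np.linspace(-W, W, n_grid)
    # log-likelihood of the data at each grid value of mu
    # (the (2*pi)^(-n/2) normalization cancels in the ratio, so it is dropped)
    loglik = -0.5 * ((x - mu[:, None]) ** 2).sum(axis=1)
    lik = np.exp(loglik - loglik.max())  # rescale for numerical safety
    # evidence of B = average of the likelihood over the flat prior,
    # i.e. (1 / 2W) * integral of the likelihood over mu
    evidence_B = lik.mean() * np.exp(loglik.max())
    evidence_A = np.exp(-0.5 * np.sum(x**2))
    post_mean = (mu * lik).sum() / lik.sum()  # posterior summary, W-independent
    return evidence_B / evidence_A, post_mean

for W in [2.0, 20.0, 200.0]:
    bf, post_mean = model_B(W)
    print(f"W = {W:6.1f}  posterior mean = {post_mean:+.4f}  "
          f"Bayes factor B/A = {bf:.4f}")
```

The printed posterior mean is the same for every W, while each factor-of-ten widening of the prior cuts the Bayes factor by a factor of ten. That is the whole problem: the number you are "selecting" models with is set by an arbitrary choice the data never see.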