I spent the weekend completely off the grid. I didn't even bring my computer or any device. That was a good idea, it turns out, even for doing research. I got in some thinking (and writing) on various projects: I sharpened up my argument (partially helped by conversations with various MPIA people last week) that you never really want to compute the Bayes evidence (fully marginalized likelihood). If it is a close call between two models, it is very prior-dependent and isn't the right calculation anyway (where's the utility?); if it isn't a close call, then you don't need all that machinery.

I worked out a well-posed form of the question "What fraction of Sun-like stars have Earth-like planets on year-ish orbits?". As usually stated, that question is not well posed, but there are various possible well-posed versions of it, and I think some of them might be answerable with extant *Kepler* data.

Along the same lines, I wrote up some kind of outline and division of responsibilities for our response to the *Kepler* call for white papers on repurposing the spacecraft in the two-wheel era. I decided that our main point is about image modeling, even though we have many thoughts and many valuable things to say about target selection, field selection, cadence, and so on. When I get back to civilization I have to email everyone with marching orders to get this done.

Rix and I have a side project to find streams or kinematic substructures in Milky-Way stellar data of varying quality. It works by building a sampling of the possible integrals of motion for each star given the observations, as realistically as possible, and then finding consensus among different stars' samplings. I worked on scoping that project and adjusting its direction. I am hoping to be able to link up stars in big Halo-star surveys into substructures.

I'll take the bait. :)

"you never really want to compute the Bayes evidence (fully marginalized likelihood)"

Marginalizing over anything at all is equivalent to having calculated a lot of (ratios of) evidences.

"If it is a close call between two models, it is very prior-dependent and isn't the right calculation anyway (where's the utility?);"

Yes, it's prior-dependent. That's a fact of life; I'm not bothered by it at all any more, and I don't think anyone else should be.

It is the right thing to do if your question is "which model is more plausible?", which is usually your question. If the question is "what decision should I make to maximise my expected utility?", and the space of possible decisions includes the space of possible Bayes factors you could state in your paper, then most sensible scoring rules lead to the decision "just publish the one you actually calculated".

"if it isn't a close call, then you don't need all that machinery."

Agree here :) Sometimes just looking at a plot of the data is enough to tell you that one model is more plausible than another. This is implicit in most modelling where people do actually look at the data before deciding what the model should be.

I don't disagree that if you are asking about plausibility you have to integrate. The true Bayesian keeps both models with the plausibility weights (which involve, of course, marginalized likelihood calculations). If you are re-weighting your mixture of two qualitatively different models then I agree you have to do the integral; I have even done that. But most performances of the integral in astronomy (at least) are for decision-making, not mixture tuning. And no Bayesian can ever make a decision without a utility function. Dependence on the prior is not a problem if your priors *actually* represent your prior beliefs. In most projects I have seen, they don't.
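To make the "keeps both models with the plausibility weights" point concrete, here is a minimal sketch (the function name and interface are my own, not from the thread): given the log of each model's marginalized likelihood (evidence) and prior model probabilities, the posterior model weights follow from Bayes' theorem, and the true Bayesian averages predictions over the models with these weights rather than discarding one.

```python
import numpy as np

def model_weights(log_Z, prior=None):
    """Posterior model probabilities from log-evidences.

    log_Z : sequence of log marginalized likelihoods, one per model.
    prior : optional prior model probabilities (default: uniform).
    """
    log_Z = np.asarray(log_Z, dtype=float)
    if prior is None:
        prior = np.full(log_Z.shape, 1.0 / log_Z.size)
    log_post = log_Z + np.log(prior)   # unnormalized log posterior odds
    log_post -= log_post.max()         # stabilize the exponentials
    w = np.exp(log_post)
    return w / w.sum()                 # normalize to probabilities

# Toy example: model 2's evidence is larger by 2 nats, uniform prior odds,
# so the weights sit in the ratio 1 : e^2.
w = model_weights([-10.0, -8.0])
```

The weight ratio here is exactly the Bayes factor times the prior odds, which is why any re-weighting of a model mixture implicitly performs the evidence integrals.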

"But most performances of the integral in astronomy (at least) are for decision-making, not mixture tuning."

Interesting. I wonder if people get that idea from the misleading terminology "model selection". We should invent a new term for that, perhaps "model merging".

I think the implicit utility function for BMS (Bayesian model selection) is pretty obvious: 0-1 (loss-win), and it has been discussed many times before.
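For what the 0-1 utility implies in practice, a tiny sketch (names are my own, hypothetical): scoring 1 for picking the true model and 0 otherwise makes the expected-utility-maximizing decision simply the most plausible model, i.e. the argmax of the posterior model probabilities.

```python
def choose_model(post_probs):
    """Index of the most plausible model: the Bayes-optimal
    decision under a 0-1 (loss-win) utility function."""
    return max(range(len(post_probs)), key=lambda i: post_probs[i])

# With posterior probabilities 0.12 and 0.88, 0-1 utility picks model 1.
choice = choose_model([0.12, 0.88])
```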
