My goal this year in Heidelberg is to move all my writing projects forward. I didn't really want to start new projects, but of course I can't help myself, hence the previous post. But today I crushed the writing: I wrote four pages of the book that Rix (MPIA) wants me to write, got more than halfway through a Templeton Foundation pre-proposal that I'm considering, and partially wrote up the method for the robust dimensionality reduction I was working on over the weekend. So it was a good day.
That said, I don't think that the iteratively reweighted least squares implementation I am using in my dimensionality reduction has a good probabilistic interpretation; that is, it doesn't correspond to optimizing any likelihood function. This is related to the fact that frequentist methods that enforce sparsity (like L1 regularization) look nothing like Bayesian methods that encourage sparsity (like massed priors, which put finite prior mass at exactly zero). I don't know how to present these issues in any paper I try to write.
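For concreteness, here is a minimal sketch of the generic IRLS pattern, applied to a robust straight-line fit rather than to the dimensionality reduction above; the weight function, data, and function name are all illustrative, not the code from this project. With this particular choice of weights the iteration happens to minimize the L1 (absolute-deviation) objective, but other reweighting schemes need not correspond to any single loss or likelihood, which is exactly the worry.

```python
import numpy as np

def irls_line_fit(x, y, n_iter=20, eps=1e-6):
    """Fit y ~ a + b*x by iteratively reweighted least squares.

    Weights w_i = 1 / max(|r_i|, eps) make each weighted
    least-squares solve a step toward the L1 objective; other
    weight functions give other robust losses. Generic sketch,
    not any particular published method.
    """
    A = np.vstack([np.ones_like(x), x]).T   # design matrix
    w = np.ones_like(y)                     # start from ordinary least squares
    for _ in range(n_iter):
        # solve the weighted normal equations A^T W A theta = A^T W y
        WA = A * w[:, None]
        theta = np.linalg.solve(A.T @ WA, A.T @ (w * y))
        r = y - A @ theta                   # residuals at the current fit
        w = 1.0 / np.maximum(np.abs(r), eps)  # downweight large residuals
    return theta

rng = np.random.default_rng(17)
x = np.linspace(0., 10., 50)
y = 1.0 + 2.0 * x + rng.normal(0., 0.5, size=x.size)
y[::9] += 15.0                              # inject a few outliers
print(irls_line_fit(x, y))                  # near (1, 2) despite the outliers
```

The point of the sketch is that the fit is defined procedurally, by the fixed point of the reweighting loop, rather than as the optimum of some stated likelihood; once the weight function is changed away from the L1-equivalent one, there may be no objective function being optimized at all.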