information content of an image

Schiminovich and I spent the morning working on UV background stuff, coding and testing our marginalized-likelihood formalism for the mean-SED fitting of quasars as a function of redshift. Over lunch, we discussed the information content of an astronomical image or project. I have a general formalism for computing this (it is related to the image modeling I have been doing with Lang), but I haven't written it up yet. The upshot of any serious consideration of the problem is that any new image brings very little new information about quantities of interest; science moves forward on the basis of very small changes in our information about the world. One example: the WMAP project has had huge impact while providing only a few bits of information about the cosmological parameters! Many next-generation projects work in the time domain. Will the information they provide justify their costs? In some sense I am betting it will, but given how little has been done in the time domain, we don't really know.
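One back-of-the-envelope way to put a number on the "few bits" claim: for a one-dimensional Gaussian prior and posterior, the entropy reduction in bits is just the base-2 log of the ratio of widths. This is a minimal sketch, not my general formalism, and the widths below are invented for illustration:

```python
import numpy as np

def information_gain_bits(sigma_prior, sigma_posterior):
    """Entropy reduction (in bits) when a Gaussian belief about one
    parameter narrows from sigma_prior to sigma_posterior.
    Illustrative toy, not the full image-information formalism."""
    return np.log2(sigma_prior / sigma_posterior)

# Toy example: a measurement that shrinks a parameter's uncertainty by a
# factor of 30 delivers only about five bits of information.
print(information_gain_bits(0.30, 0.01))  # ~4.9 bits
```

Halving an uncertainty is exactly one bit, which is why even a transformative experiment can amount to only a handful of bits per parameter.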


fitting HST photometry

[No posts for a few days because I was on travel.]

Lang and I started fitting PHAT photometry of hot stars in M31 with dust-attenuated blackbodies to make a crude color–magnitude diagram for the hottest stars. This is a baby step towards determining the IMF.
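A dust-attenuated blackbody is a simple parametric SED; a sketch of the kind of model one might evaluate against photometry follows. The attenuation law here (a crude 1/lambda scaling of A_V) and all numerical values are assumptions for illustration, not what we are actually fitting:

```python
import numpy as np

# SI constants
h, c, k = 6.626e-34, 2.998e8, 1.381e-23

def attenuated_blackbody(wavelength_m, T, A_V, amplitude=1.0):
    """Planck spectrum times a toy dust attenuation law.
    The A(lambda) ~ A_V * (550 nm / lambda) law is a rough stand-in."""
    B = (2 * h * c**2 / wavelength_m**5) / np.expm1(h * c / (wavelength_m * k * T))
    A_lambda = A_V * (550e-9 / wavelength_m)      # attenuation in magnitudes
    return amplitude * B * 10 ** (-0.4 * A_lambda)

# Fluxes at a UV and an optical wavelength for a hot (25,000 K) star:
waves = np.array([250e-9, 550e-9])
print(attenuated_blackbody(waves, 25000.0, A_V=1.0))
```

For hot stars the UV band stays brighter than the optical even after a magnitude of visual extinction, which is what makes the hot end of the color–magnitude diagram accessible this way.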


image subtraction

Armin Rest (STScI) spent the day at NYU, so we discussed image subtraction, which he has used to great effect in finding supernovae and, most impressively, light echoes from old supernovae. Done properly, image subtraction is very powerful. Of course Lang and I hope to beat it with direct image modeling, but the subtractors (Rest included) think we have no chance.
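The core of kernel-based image subtraction is a linear fit: solve for the convolution kernel that matches the template to the science image, then subtract. Here is a minimal one-dimensional toy of that idea, with simulated data and a noiseless point-source template; it is not Rest's pipeline, just the basic trick:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated 1-D "images": a sharp template and a blurrier science frame.
n, ksize = 200, 11
template = np.zeros(n)
template[[50, 120, 160]] = [10.0, 5.0, 7.0]        # point sources
true_kernel = np.exp(-0.5 * (np.arange(-5, 6) / 1.2) ** 2)
true_kernel /= true_kernel.sum()
science = np.convolve(template, true_kernel, mode="same") \
    + 0.01 * rng.standard_normal(n)

# Least-squares design matrix: each column is the template shifted by one
# kernel offset, so science ~ A @ kernel.
offsets = np.arange(ksize) - ksize // 2
A = np.stack([np.roll(template, s) for s in offsets], axis=1)
kernel, *_ = np.linalg.lstsq(A, science, rcond=None)
residual = science - A @ kernel
print(np.std(residual))   # should be near the injected noise level, 0.01
```

Anything that is not static (a supernova, an echo) survives in the residual, which is the whole point of the method.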


Stripe-82 photometry

Foreman-Mackey and I discussed some details of his photometry project in the massive SDSS Stripe-82 time-domain data set, where we hope to make variability measurements (with Lang, Willman, Fadely) below the individual-exposure detection limit. We are starting in a forced-photometry framework; that is, we assume we know where all the interesting sources are and the only problem is to get them measured in the low signal-to-noise data. Our biggest short-term challenge is calibration; everyone who has worked in the non-science-grade runs in Stripe 82 previously has made her or his own calibration. We hope to release ours publicly.
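The reason forced photometry works below the single-exposure detection limit is that with positions fixed, every flux is a linear parameter, so the measurement is just linear least squares with honest uncertainties. A one-dimensional sketch under invented numbers (toy positions, Gaussian PSF, white noise), not our actual Stripe 82 code:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 128
positions = np.array([30.2, 70.7, 95.0])   # source positions, known a priori
true_fluxes = np.array([0.5, 0.2, 0.8])    # faint: peak-pixel S/N of a few at best
x = np.arange(n)
psf_sigma = 2.0

def psf_profile(center):
    """Unit-flux Gaussian PSF centered at a known position."""
    p = np.exp(-0.5 * ((x - center) / psf_sigma) ** 2)
    return p / p.sum()

# Design matrix: one PSF column per known source; fluxes are linear.
A = np.stack([psf_profile(c) for c in positions], axis=1)
noise = 0.02
data = A @ true_fluxes + noise * rng.standard_normal(n)

fluxes, *_ = np.linalg.lstsq(A, data, rcond=None)
cov = noise**2 * np.linalg.inv(A.T @ A)    # flux covariance
print(fluxes, np.sqrt(np.diag(cov)))
```

Even when no individual exposure yields a detection, these per-exposure flux measurements (with their uncertainties) can be combined across epochs into variability constraints, which is the plan for Stripe 82.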



mixtures of Gaussians

The de Vaucouleurs profile is a horror! Anyone who has tried to accurately sample it on a grid knows this: it requires enormous numbers of samples at small radius and, at the same time, samples out to enormous radius if you want the total flux to come out right (which you don't, but that argument is too annoying to make here). These issues would be eased if we never had to sample the deV profile itself, only the seeing-convolved deV profile, which is well behaved on all interesting scales in well-sampled imaging (by assumption). My solution is to make a mixture-of-Gaussians approximation to the deV profile. If you also have a mixture-of-Gaussians approximation to the point-spread function, then seeing convolution is analytic and you never have to render the unconvolved nasty thing. Lang and I started implementing this approach in the Tractor today, but we are far from done. I am so excited about this technical breakthrough that I might have to write a (highly obscure) paper about it.
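The analytic convolution at the heart of the trick: the convolution of two mixtures of Gaussians is itself a mixture of Gaussians, with amplitudes multiplied and variances added pairwise. A one-dimensional, zero-mean sketch with made-up component values (not the actual deV approximation), checked against brute-force numerical convolution:

```python
import numpy as np

def convolve_mog(amps1, vars1, amps2, vars2):
    """Convolve two zero-mean 1-D Gaussian mixtures analytically:
    amplitudes multiply pairwise, variances add pairwise."""
    amps = np.outer(amps1, amps2).ravel()
    vars_ = np.add.outer(vars1, vars2).ravel()
    return amps, vars_

def mog_eval(x, amps, vars_):
    """Evaluate a zero-mean Gaussian mixture on a grid."""
    return sum(a * np.exp(-0.5 * x**2 / v) / np.sqrt(2 * np.pi * v)
               for a, v in zip(amps, vars_))

# Toy "galaxy" (steep inner plus shallow outer component) and toy "PSF":
gal_amps, gal_vars = np.array([0.7, 0.3]), np.array([0.05, 2.0])
psf_amps, psf_vars = np.array([0.9, 0.1]), np.array([1.0, 4.0])
amps, vars_ = convolve_mog(gal_amps, gal_vars, psf_amps, psf_vars)

# Check against brute-force numerical convolution on a fine grid:
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
numeric = np.convolve(mog_eval(x, gal_amps, gal_vars),
                      mog_eval(x, psf_amps, psf_vars), mode="same") * dx
analytic = mog_eval(x, amps, vars_)
print(np.max(np.abs(numeric - analytic)))  # should be tiny
```

With M galaxy components and N PSF components the convolved profile has M times N components, still cheap to evaluate, and the unconvolved profile never has to be rendered.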


average flux ratios

Schiminovich and I hid out in Manhattan to pair-code and pair-write today on the intergalactic ultraviolet radiation field. We have pretty good results from co-adding flux from many quasars in narrow redshift bins, but we realized that we might be able to do better with a more proper inference. We wrote down a likelihood with some nuisance parameters relating to the distribution of quasar spectral energy distributions, and then marginalized and optimized to compare with the (less justified) co-adding procedure. Results aren't in yet, but it looks like the two procedures give similar answers, with the inference much better justified. Was that worth a day of work? Not sure, but I definitely enjoyed it!
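When the nuisance parameter is a per-quasar amplitude with a Gaussian prior, the marginalization is analytic: the marginal likelihood is Gaussian with a rank-one term added to the covariance. Here is a sketch of that step for a single quasar, with invented fluxes, noise level, and prior width (our actual likelihood has more structure than this):

```python
import numpy as np

def marginal_loglike(d, s, sigma, a_mean=1.0, a_var=0.25):
    """Log-likelihood of fluxes d given a candidate mean SED s, after
    analytically marginalizing an amplitude a ~ N(a_mean, a_var):
    d ~ N(a_mean * s, sigma^2 I + a_var * s s^T).  Toy numbers only."""
    n = len(d)
    C = sigma**2 * np.eye(n) + a_var * np.outer(s, s)
    r = d - a_mean * s
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (r @ np.linalg.solve(C, r) + logdet + n * np.log(2 * np.pi))

rng = np.random.default_rng(1)
s = np.array([3.0, 2.0, 1.0, 0.5])              # candidate mean SED
d = 1.3 * s + 0.1 * rng.standard_normal(4)      # a quasar with amplitude 1.3
print(marginal_loglike(d, s, sigma=0.1))
```

Summing this over all quasars in a redshift bin and optimizing with respect to the mean SED gives the inference-based alternative to co-adding.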



This afternoon Roman Rafikov (Princeton) gave a nice talk about metallicity anomalies in white dwarfs, with a probable connection to accretion from dust or proto-planetary-like disks. It was all good, but the result I will take to the grave was about accretion induced by Poynting–Robertson drag: Because it is a pure special-relativity effect, the mass accretion rate (mass per unit time) is just the intercepted luminosity (energy per unit time) divided by two factors of the speed of light. That is, it is a perfect converter of energy into mass!
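The "two factors of the speed of light" statement is just arithmetic: the Poynting–Robertson accretion rate is the intercepted luminosity divided by c squared. A quick sketch with made-up illustrative numbers (the white-dwarf luminosity and intercepted fraction below are not from the talk):

```python
# Mass accretion rate from Poynting-Robertson drag: Mdot = L_intercepted / c^2.
c = 2.998e8                 # speed of light, m/s
L_wd = 3.8e23               # W: about 1e-3 L_sun, a toy white-dwarf luminosity
f_intercepted = 0.01        # toy fraction of the light hitting the disk
mdot = f_intercepted * L_wd / c**2
print(mdot, "kg/s")         # ~4e4 kg/s
```

The striking part is that no efficiency factor appears: the drag converts intercepted radiative energy into accreted rest mass at exactly the E = mc^2 rate.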



I spoke in Sarah Bridle's group meeting at UCL today by videocon. The technology worked very well. I talked about modeling astrophysics data, with massive internal marginalizations.


cold flows

Hennawi gave a nice talk at NYU today, showing his attempts to see cold flows in absorption along lines of sight to background quasars. He finds some strange stuff: It looks like cold, dense gas, but it is very high in metallicity. He is working on damped Lyman-alpha absorbers associated with foreground quasars, which we know to be in massive halos. Intriguing and confusing stuff.


XD photo-z

After a nice blackboard talk by Bovy about quasar target selection, he, Hennawi (visiting from MPIA), and I worked out a more general photometric-redshift framework that could be applied to galaxies or quasars or anything else. It involves lifting the photometric space into a much higher-dimensional spectrum space; at each redshift, each band observation is a projection of that high-dimensional space (plus noise). For any individual object there is no non-degenerate fit in the higher-dimensional space, but as long as there is large redshift diversity, a collection of galaxies can constrain a prior on the spectrum space that makes the redshift inference optimally hierarchical. Love it! But it won't be easy to implement.
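The degeneracy-then-diversity structure can be seen in linear algebra: each object supplies only a few projections of a high-dimensional spectrum vector, but objects spread across redshifts stack projections that together cover the spectrum space. A toy version, with an invented top-hat band model and bin counts chosen only for illustration:

```python
import numpy as np

K, B = 20, 4   # spectrum bins, photometric bands

def projection(z_shift):
    """B x K matrix: top-hat bands applied to a spectrum shifted by
    z_shift bins (a cartoon of redshifting; purely illustrative)."""
    P = np.zeros((B, K))
    width = K // B
    for b in range(B):
        lo = b * width + z_shift
        P[b, lo:lo + width] = 1.0
    return P

# One object: at most B equations for K >> B unknowns -- degenerate.
print(np.linalg.matrix_rank(projection(0)))            # 4
# Objects at several redshifts: the stacked operator spans the full space.
stacked = np.vstack([projection(z) for z in range(5)])
print(np.linalg.matrix_rank(stacked))                  # 20 in this toy
```

In the real problem one would not invert this directly but learn a prior over the spectrum space hierarchically; the rank counting just shows why redshift diversity is what makes that prior constrainable.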



Ned Wright (UCLA) gave a great talk in the astro seminar slot on WISE, which has done an all-sky mid-infrared survey and is releasing data this spring. It is a beautiful project with beautiful results and many more to come.


four survey strategies

Holmes and I made two new survey strategies for our self-calibration project, and improved the random strategy to make it far more homogeneous (previously the pointings were Poisson-distributed; now they are more uniform without giving up randomness). We now have everything we need to write paper 1, except for the dependencies of the calibration quality on things like the density of standard stars, the parameters of the noise model, and so on.
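One standard way to get "more uniform without giving up randomness" is a jittered grid: one uniform-random pointing inside each grid cell, instead of pointings scattered over the whole field. This sketch is that generic sampling trick under assumed parameters, not necessarily the strategy in the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

def poisson_pointings(n):
    """Purely uniform-random (Poisson-process-like) pointings in the unit square."""
    return rng.uniform(0, 1, size=(n, 2))

def jittered_pointings(m):
    """m x m grid cells, one uniform-random pointing inside each cell."""
    cell = 1.0 / m
    ii, jj = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    offsets = rng.uniform(0, cell, size=(m, m, 2))
    pts = np.stack([ii * cell, jj * cell], axis=-1) + offsets
    return pts.reshape(-1, 2)

def count_variance(points, bins=4):
    """Variance of pointing counts over a coarse grid: a crude homogeneity metric."""
    H, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                             bins=bins, range=[[0, 1], [0, 1]])
    return H.var()

# Same total number of pointings; the jittered grid is far more homogeneous.
print(count_variance(poisson_pointings(256)), count_variance(jittered_pointings(16)))
```

The jittered pointings keep the randomness that self-calibration needs (no two pointings are related by an exact lattice vector) while killing the count fluctuations of a pure Poisson strategy.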


dynamical models

Widrow, Foreman-Mackey, and I spent a bit of the morning discussing the best approaches to dynamical modeling in the short term. I am in favor of non-parametric modeling of the distribution function, and then marginalization. This is hard, because non-parametric models have enormous numbers of parameters. But I don't see any other principled method, and the nuisance parameters definitely matter. However, Widrow suggested that we could start with parameterized models, based on things that are known about potential-density pairs. This is a good idea, and we can use it to test assumptions; in particular we can see if, in toy systems, you get wrong answers about the potential if you use overly simplistic distribution function models.


modeling galaxies, flat-fields

Larry Widrow (Queen's) gave a very nice informal talk about his work modeling galaxies as self-consistent dynamical systems, with (axisymmetric) disk, bulge, and halo. He has a nice framework in which he can use all the (data) constraints and do MCMC fast, because his models are flexible but not N-body. He then takes a subsample of the acceptable galaxy models from the posterior PDF and integrates them with an N-body code; sure enough they are all unstable to spiral-arm and bar formation, but probably not inconsistent with the observed bar and arm sizes and amplitudes. He also showed some nice data on the M31 group, and indicated that recent results on non-sphericity of the Milky Way halo are almost certainly over-estimates.

Holmes and I made the flat-field in our Euclid simulations far more pessimistic (or conservative, if you prefer), giving it low-level power on a range of scales. We also changed our uber-calibration code so that it has slightly (and systematically) wrong estimates of the true noise variances affecting each star, and we made it such that the function space in which we fit does not match the function space in which the true flat-field was generated. That is, we made the whole system much more realistic. The results are even more stark: random observing strategies crush grid-like strategies for calibration.
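The mismatched-function-space setup can be illustrated in one dimension: generate a "true" flat-field with power on several scales, then fit it in a basis that cannot represent it, leaving an irreducible residual. The scales, amplitudes, and bases below are invented for illustration and are not the Euclid simulation:

```python
import numpy as np

rng = np.random.default_rng(3)

# "True" flat-field: unity plus low-level sinusoidal power on several scales,
# with less power on smaller scales.
x = np.linspace(0, 1, 500)
true_flat = np.ones_like(x)
for k_mode in (2, 5, 11, 23, 47):
    amp = 0.01 / np.sqrt(k_mode)
    true_flat += amp * np.sin(2 * np.pi * k_mode * x + rng.uniform(0, 2 * np.pi))

# Fit in a deliberately mismatched (low-order polynomial) function space:
coeffs = np.polynomial.polynomial.polyfit(x, true_flat, deg=4)
model = np.polynomial.polynomial.polyval(x, coeffs)
print(np.std(true_flat - model))   # irreducible residual from basis mismatch
```

No observing strategy can make that residual vanish, but strategies differ in how badly the unmodeled power aliases into the calibration solution, which is where randomness beats grids.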