2011-07-29

extremely faint tidal features

David Martinez-Delgado (MPIA) gave a beautiful seminar today about his work obtaining extremely flat and sensitive images of nearby galaxies, showing that many have faint tidal remnants of past accretion events. The data are beautiful, thanks to his exquisite flat-fielding. He and his team made most of the images with 10 or 20 cm (yes, cm) telescopes!

During the rest of the day, Lang and I worked on source detection in multi-exposure imaging; we are pretty sure we can beat all of the standard methods, but we don't have a clear demo yet.

2011-07-28

sixty thousand parameters

In a very nice MPIA Galaxy Coffee presentation, Bovy noted that there are sixty thousand parameters in the XDQSO BOSS CORE quasar target selection, and three hundred thousand in the XDQSOz photometric redshift system. So we did some monster optimizations there! He showed how it all works and why. I finished the day paying scientific IOUs.

2011-07-27

inferring the noise

Hennawi and I spent part of the day pair-coding a simple toy code to infer the spectroscopic noise model given a few (extracted, one-dimensional) spectra of the same object taken with the same spectrograph. Hennawi wants a precise noise model for the SDSS spectrographs for Lyman-alpha forest clustering and IGM studies. We figured out a quasi-correct Gibbs sampling method that draws posterior samples of the object's true spectrum and then, given each of those, samples in noise-model parameter space. It seems to work! Now to run on real data, which contain a few additional complications (like non-trivial interpolation and non-Gaussian noise components like cosmic rays).
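
For concreteness, here is a minimal toy version of that kind of quasi-correct Gibbs sampler, in Python. The specific noise model (per-pixel variance a + b times flux), the parameter values, and the Metropolis-within-Gibbs step on the noise parameters are invented here for illustration; this is a sketch of the idea, not the code we actually wrote.

    import numpy as np

    rng = np.random.default_rng(42)

    # fake data: nspec repeat spectra of the same object, same spectrograph
    npix, nspec = 200, 4
    true_flux = 10.0 + 5.0 * np.exp(-0.5 * ((np.arange(npix) - 80.0) / 6.0) ** 2)
    a_true, b_true = 0.5, 0.2                  # noise model: var = a + b * flux
    var_true = a_true + b_true * true_flux
    data = true_flux[None, :] + rng.normal(size=(nspec, npix)) * np.sqrt(var_true)

    def ln_like_noise(a, b, flux, data):
        """ln p(data | flux, a, b) under the model var = a + b * flux."""
        if a <= 0.0 or b < 0.0:
            return -np.inf
        var = a + b * flux
        resid2 = (data - flux[None, :]) ** 2
        return -0.5 * np.sum(resid2 / var[None, :] + np.log(var[None, :]))

    a, b = 1.0, 0.1                            # noise-model starting guess
    flux = data.mean(axis=0)                   # spectrum starting guess
    chain = []
    for _ in range(2000):
        # 1. sample the true spectrum, holding the variance fixed at the
        #    current flux estimate (this is the "quasi-correct" shortcut)
        var = a + b * flux
        flux = data.mean(axis=0) + rng.normal(size=npix) * np.sqrt(var / nspec)
        # 2. Metropolis-within-Gibbs step on the noise-model parameters
        a_new, b_new = a + 0.05 * rng.normal(), b + 0.02 * rng.normal()
        if np.log(rng.uniform()) < (ln_like_noise(a_new, b_new, flux, data)
                                    - ln_like_noise(a, b, flux, data)):
            a, b = a_new, b_new
        chain.append((a, b))

    chain = np.array(chain[500:])              # drop burn-in
    print("a, b posterior means:", chain.mean(axis=0), "truth:", (a_true, b_true))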

2011-07-26

more IMF, objects that moved

In the morning I had discussions with Gennaro (MPIA), Brandner (MPIA), and Stolte (Bonn) about their work on fitting the IMF in the Milky Way young star cluster Westerlund 1. Dan Weisz and I have been getting different results from theirs, and I was interested in finding out why; the methods differ significantly, but the data are the same. I think I see a path to resolving the discrepancies, but it depends on the method that Gennaro et al. used to estimate their completeness as a function of position and magnitude.

In the afternoon, Lang and I worked on objects that don't match well between PHAT epochs. These are all likely to be either foreground stars with significant proper motions or else variable objects. By fitting out high-order distortions in the HST ACS camera, we can get few-milliarcsec per-source position uncertainties, so we are pretty sensitive even with PHAT's short internal time baseline.

2011-07-25

IMF fitting

I worked with Dan Weisz (Washington) on his IMF fitting code for much of the afternoon. We got it working, but we had to make some approximations with which we are not entirely happy. We started to get results on one Milky Way cluster with published masses, but we were surprised enough by our numbers to not trust them. We learned a lot in the pair-coding mode, though, so I thoroughly enjoyed the coding session.

2011-07-24

Tractor with fixed sources

Lang and I stole some weekend time to get the Tractor to run on SDSS images but with strong priors on source locations. We are doing this for Myers, who has measured some quasar positions with the EVLA and wants to know what SDSS has to say about them, conditioned on the EVLA results. This is a baby step towards the problem: How do you analyze a big bag of heterogeneous imaging data at the angular resolution of the best data?
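
To sketch what conditioning on an external position looks like, here is a toy in Python (not the actual Tractor code or its API): fit a single Gaussian source to a noisy image, with a Gaussian prior on its position whose center and width stand in for the EVLA measurement; the prior just becomes an extra term in the objective.

    import numpy as np
    from scipy.optimize import minimize

    # toy image: a single Gaussian "star" on a noisy background
    rng = np.random.default_rng(1)
    ny, nx = 32, 32
    yy, xx = np.mgrid[0:ny, 0:nx]

    def model(params):
        amp, x0, y0 = params
        return amp * np.exp(-0.5 * ((xx - x0) ** 2 + (yy - y0) ** 2) / 2.0 ** 2)

    sigma_pix = 1.0
    image = model([50.0, 15.3, 16.7]) + rng.normal(size=(ny, nx)) * sigma_pix

    # external (radio) position and its uncertainty, in pixel units
    x_prior, y_prior, sigma_prior = 15.0, 17.0, 0.2

    def neg_ln_post(params):
        chi2 = np.sum((image - model(params)) ** 2) / sigma_pix ** 2
        x0, y0 = params[1], params[2]
        prior = ((x0 - x_prior) ** 2 + (y0 - y_prior) ** 2) / sigma_prior ** 2
        return 0.5 * (chi2 + prior)

    fit = minimize(neg_ln_post, x0=[10.0, x_prior, y_prior], method="Nelder-Mead")
    print("amplitude, x, y conditioned on the prior:", fit.x)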

2011-07-22

post 1500, hierarchical classification

I spent much of the day working on toy examples, and the presentation, for my MPIA Hauskolloquium on classification. I argued that marginalized likelihood is the way to go (before applying your utility, that is, your long-term future discounted free cash flow model), but that for it to work well you need to learn the relevant priors from the data you are classifying. That is, if you are working at the bleeding edge (as you should be), the most informative data set you have (about, say, star–galaxy classification at 29.5 magnitude) is the data set you are using; if it isn't: Change data sets! Or another way to put it: Any labeled data you have to use for classification—or any priors you have—are based on much smaller, or much worse, data sets. So for my seminar I argued that you should just learn the priors hierarchically as you go. I demonstrated that this whole program works extremely well, in realistic demos that involve fitting with wrong and incomplete models (as we do in the real world).
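
Here is a stripped-down, one-dimensional version of the kind of demo I mean, in Python: two fixed class likelihoods (a "star" model and a "galaxy" model for some feature), with the prior (the star fraction) learned hierarchically from the very data being classified, by expectation-maximization on the marginalized likelihood. All the distributions and numbers are invented for illustration; the real demos were more realistic (and used wrong and incomplete models).

    import numpy as np

    rng = np.random.default_rng(3)

    # fake "data set being classified": a 1D feature (say, a size measurement);
    # stars are narrow around 0, galaxies broader around 1; the true mix is 70/30
    n = 5000
    is_star = rng.uniform(size=n) < 0.7
    x = np.where(is_star, rng.normal(0.0, 0.3, size=n), rng.normal(1.0, 0.6, size=n))

    def like_star(x):   # per-object likelihood under the star model
        return np.exp(-0.5 * (x / 0.3) ** 2) / (0.3 * np.sqrt(2.0 * np.pi))

    def like_gal(x):    # per-object likelihood under the galaxy model
        return np.exp(-0.5 * ((x - 1.0) / 0.6) ** 2) / (0.6 * np.sqrt(2.0 * np.pi))

    # hierarchical step: learn the prior (the star fraction) from these same data
    f_star = 0.5
    for _ in range(200):
        p_star = f_star * like_star(x)
        p_gal = (1.0 - f_star) * like_gal(x)
        resp = p_star / (p_star + p_gal)   # per-object posterior star probability
        f_star = resp.mean()               # update the prior from the data

    print("learned star fraction:", f_star)             # close to the true 0.7
    print("classification accuracy:", np.mean((resp > 0.5) == is_star))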

[This is post 1500. And this exercise of daily blogging still is a valuable (to me) part of my practice.]

2011-07-21

HST centroiding; stochastic galaxies

How to measure precisely the position of a star in an HST ACS image? The imager is not well sampled, so you can't just interpolate a smooth PSF model to the sub-pixel offset that is relevant. You have to build a pixel-convolved model for every sub-pixel offset. That is annoying! After figuring out how unlikely it is that our current software (not written by us) is doing this correctly, we returned to the issue of the high-order polynomial terms. In the morning, Robert da Silva (UCSC) gave a very nice talk about the observational consequences of the fact that galaxies form stars in bursts and therefore provide stochastic illumination.
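
For a Gaussian toy PSF the pixel-convolved model is analytic, which makes the point concrete. This is an illustration in Python with invented widths and offsets, not our ACS machinery: it compares integrating the PSF over each pixel (correct) with sampling the smooth PSF at pixel centers (only correct for well-sampled images).

    import numpy as np
    from scipy.special import erf

    def pixel_convolved_gaussian(x0, y0, sigma, nx=11, ny=11):
        """Gaussian PSF centered at (x0, y0), integrated over each whole pixel;
        this is the pixel-convolved model."""
        xe = np.arange(nx + 1) - 0.5      # pixel edges
        ye = np.arange(ny + 1) - 0.5
        fx = 0.5 * (erf((xe[1:] - x0) / (np.sqrt(2.0) * sigma))
                    - erf((xe[:-1] - x0) / (np.sqrt(2.0) * sigma)))
        fy = 0.5 * (erf((ye[1:] - y0) / (np.sqrt(2.0) * sigma))
                    - erf((ye[:-1] - y0) / (np.sqrt(2.0) * sigma)))
        return np.outer(fy, fx)           # separable: flux fraction per pixel

    # undersampled case (sigma smaller than a pixel): the naive model, which
    # just samples the smooth PSF at pixel centers, disagrees noticeably, and
    # that is what bites you when you chase milliarcsecond centroids
    x0, y0, sigma = 5.3, 5.7, 0.7
    yy, xx = np.mgrid[0:11, 0:11]
    naive = np.exp(-0.5 * ((xx - x0) ** 2 + (yy - y0) ** 2) / sigma ** 2)
    naive /= naive.sum()
    good = pixel_convolved_gaussian(x0, y0, sigma)
    print("max |difference| in per-pixel flux fraction:", np.abs(good - naive).max())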

2011-07-20

HST ACS distortions

Chatted with Holmes about his thesis chapter about ubercalibration; chatted with Bovy about his results on the Milky-Way disk as observed by SEGUE; filed tickets against the new web version of Astrometry.net. Late in the day, Lang and I got onto fitting the polynomial distortion terms in the HST ACS instrument; the official distortion map only goes to fourth order (if I remember correctly) but we see what look like systematic effects at higher orders.

2011-07-19

software instrumentation

In an action-packed day, I had several conversations about software as instrumentation. The first was with Rob Simcoe (MIT), after his excellent talk about his FIRE spectrograph: I asked him what aspects of the design were driven by the desire to make data reduction simpler. He claimed none, though I didn't entirely believe him. My point is that instruments should be designed to preserve or transmit information, and we shouldn't be afraid to write complicated software. So making the scattered light low is good, because scattered light reduces signal-to-noise, but working hard to make the traces straight is a waste, because curved traces can be handled in software. Simcoe said that all significant costs were of the first kind and not the second. The second conversation about software was on the bus with J. D. Smith (Toledo), who is responsible for some of the most important Spitzer-related projects. He, like me, would like to see changes that reward and recognize astronomers for the infrastructure they create.

The day also included work on binary black holes, M31 dust, and M31 proper motion.

2011-07-18

dust in M31, quasar redshifts

In the morning I wrote code to implement Dalcanton's M31 dust model, both for making fake data and for performing likelihood calculations given model parameters. By the time I was done she had moved on to a more sophisticated model! But I was pleased with my code; I think the simple model is strongly ruled out by the data.

In the afternoon, Hennawi, Tsalmantza, Maier, Myers, and I discussed using the HMF method that Tsalmantza and I developed to improve quasar redshift determination. Hennawi needs precise redshifts to do some sensitive IGM and proximity-effect measurements; when quasars don't have narrow lines in the visible, this is challenging. We are hoping to beat the current status quo. Gabriele Maier is making a training set for Tsalmantza and me to work with. Hennawi specified a very limited paper 1 for us to work towards.

2011-07-17

chatting, dust in M31

I didn't do much research on Friday or the weekend, except for chats with Lang and Dalcanton about their respective technical issues. Dalcanton's problem is to infer the dust distribution in M31 from the colors and magnitudes of the red giants. It seems easy enough, but when you are doing dust, and you are letting it live in three dimensions, some nasty integrals appear; do you integrate numerically, or use simple distributions that have analytic results or approximations?

2011-07-14

are there galaxies behind M31?

The answer to this question is yes of course, but Lang and I spent a bit of time trying to find them, in the places where multi-epoch HST data overlap, for all the obvious astrometric reasons you might want to do that. The day also included a talk by Martin Elvis on mining asteroids (of all things!), and discussions with Tsalmantza and Adam Bolton about finding double-redshift objects in spectra.

2011-07-13

Intelligent Systems, day 3

In the morning we continued our discussions of multi-exposure imaging. I love the style of this computational imaging group: Work hard all day, but work equals sitting in the garden arguing! We particularly discussed what you could believe about a model made by forward-modeling through the PSF (that is, a deconvolution). My position is that because there are near-degeneracies in such modeling, you have to return a posterior probability distribution over deconvolved images (or probably a sampling of that); Fergus thought it might be possible to make an adaptive model complexity designed to maintain unimodality in the posterior PDF. Either way, representing the posterior PDF is not going to be trivial! We postponed all such issues to subsequent projects; we have a scope for a first paper that skirts them.

In the afternoon, Christopher Burger, Stefan Harmeling, and I discussed making probabilistic models of CCD bias, dark-current, flat, and read-noise frames, from a combination of zero, dark, flat, and science data. We decided to run some experiments with a laboratory CCD camera and, if they work, repeat them with archival HST data.
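
As a cartoon of what such a model looks like (a Python toy with an invented detector, Gaussian noise only, and known exposure times and lamp level; nothing like the real experiments we are planning), each pixel gets a bias, a dark rate, and a flat response, and the different frame types constrain them:

    import numpy as np

    rng = np.random.default_rng(7)

    # per-pixel "truth" for a tiny toy detector
    npix = 1000
    bias = 1000.0 + rng.normal(0.0, 2.0, npix)    # ADU
    dark = rng.uniform(0.01, 0.05, npix)          # ADU per second
    flat = 1.0 + rng.normal(0.0, 0.05, npix)      # relative response
    read_noise = 5.0                              # ADU, assumed constant here

    def expose(t_exp, sky=0.0):
        """One toy frame: bias + dark * t + flat * sky * t + read noise."""
        signal = bias + dark * t_exp + flat * sky * t_exp
        return signal + rng.normal(0.0, read_noise, npix)

    zeros = np.array([expose(0.0) for _ in range(10)])
    darks = np.array([expose(900.0) for _ in range(5)])
    flats = np.array([expose(10.0, sky=2000.0) for _ in range(5)])

    # maximum-likelihood estimates under this (Gaussian) model
    bias_hat = zeros.mean(axis=0)
    read_noise_hat = zeros.std(axis=0, ddof=1).mean()
    dark_hat = (darks.mean(axis=0) - bias_hat) / 900.0
    flat_hat = (flats.mean(axis=0) - bias_hat - dark_hat * 10.0) / (2000.0 * 10.0)

    print("read noise:", read_noise_hat)
    print("typical dark-rate error:", np.abs(dark_hat - dark).mean())
    print("typical flat error:", np.abs(flat_hat - flat).mean())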

2011-07-12

Intelligent Systems, day 2

In the morning a group of us (Fergus, Schölkopf, Hirsch, Harmeling, myself) worked on the idea that Hirsch et al.'s lucky image deconvolution system could be used to combine and model any multi-exposure imaging. The system seems to work on pretty much anything (as it should), although there are knobs to turn when it comes to engineering aspects (as in: batch vs online, how to initialize, multiplicative or additive update steps, parameterization of the image plane, etc). In the afternoon, Hirsch, Harmeling, and I specified the content of a publishable (in astronomy) paper making that point, with explicit demonstration of applicability to PanSTARRS, DES, and LSST. We spent a long time talking about what the most assumption-free kind of model is; Harmeling likes grids of pixels as models; I like grids of pixels plus floating Gaussians or delta-functions.
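
To show the multiplicative-update flavor concretely, here is a generic multi-frame Richardson-Lucy-style toy in Python, with known per-frame PSFs and made-up numbers; it is not the Hirsch et al. online system, just the simplest thing in that family. Each iteration averages the per-frame multiplicative corrections to the latent image.

    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(11)

    # latent "sky": a few point sources on a grid
    truth = np.zeros((64, 64))
    for _ in range(5):
        truth[rng.integers(10, 54), rng.integers(10, 54)] = rng.uniform(50.0, 200.0)

    def gaussian_psf(sigma, size=15):
        r = np.arange(size) - size // 2
        p = np.exp(-0.5 * (r[:, None] ** 2 + r[None, :] ** 2) / sigma ** 2)
        return p / p.sum()

    # several exposures, each with its own (here, known) PSF plus Poisson noise
    psfs = [gaussian_psf(s) for s in (1.5, 2.5, 3.5)]
    frames = [rng.poisson(fftconvolve(truth, p, mode="same") + 10.0) for p in psfs]

    # multiplicative (Richardson-Lucy-like) updates on the latent image,
    # averaging the correction factors from all frames
    image = np.ones_like(truth) * np.mean(frames)
    for _ in range(100):
        correction = np.zeros_like(image)
        for frame, p in zip(frames, psfs):
            model = fftconvolve(image, p, mode="same") + 10.0   # 10.0 = background
            correction += fftconvolve(frame / model, p[::-1, ::-1], mode="same")
        image *= correction / len(frames)

    print("brightest recovered pixel:", image.max(), "vs truth:", truth.max())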

I also gave an informal seminar about work by Lang and me on Astrometry.net, web 2.0 stuff, the Open-Source Sky Survey, the Tractor, and other vapor-ware.

2011-07-11

Intelligent systems, day 1

I am spending three days visiting the group of Bernhard Schölkopf in Tübingen to discuss possible overlaps between his work on computational photography and image processing and current problems in astrophysics. Wow, there is a lot of overlap! In particular, his group has solved a problem I am interested in: How to combine lucky imaging data (fast images, variable PSF) to get information at the highest possible resolution and signal-to-noise. The current schemes usually throw out less-good data, rather than combine it optimally. Here in Tübingen, Hirsch et al have something close to an optimal system. It is described here.

Rob Fergus is also visiting at the same time. In the afternoon he gave a talk about his work on blind deconvolution of natural images, using the statistics of image gradients as a handle on the point-spread function. All of astronomy is a form of blind deconvolution: The instrument (plus atmosphere) convolves what we care about with an unknown point-spread function; we must simultaneously figure out that function and what we care about. This is an ill-posed problem we solve using heuristics (like find the stars, estimate the PSF using them, then look at everything else assuming that PSF), but something that truly does simultaneous inference will win in the end.

Tomorrow is supposed to be a hack day!

2011-07-10

IMF sandbox

On a train from the Alps to Heidelberg past the Bodensee I wrote a sandbox for making fake data for testing competing initial-mass-function fitting schemes. It passes my functional tests.
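
The core of such a sandbox is tiny. Here is a sketch of the kind of thing I mean, in Python: a pure power-law IMF drawn by inverse-transform sampling, plus a toy completeness cut; the functional forms and numbers are stand-ins, not necessarily what my sandbox actually uses.

    import numpy as np

    rng = np.random.default_rng(2011)

    def sample_powerlaw_imf(n, alpha=2.35, m_min=0.5, m_max=120.0):
        """Inverse-transform sampling of dN/dm propto m**(-alpha) on [m_min, m_max]."""
        u = rng.uniform(size=n)
        a = 1.0 - alpha
        return (m_min ** a + u * (m_max ** a - m_min ** a)) ** (1.0 / a)

    def fake_cluster(n_stars=500, completeness_turnover=1.0):
        """Fake catalog: true masses, with an incompleteness cut that
        preferentially drops low-mass stars (a stand-in for magnitude limits)."""
        m = sample_powerlaw_imf(n_stars)
        p_detect = m / (m + completeness_turnover)
        detected = rng.uniform(size=n_stars) < p_detect
        return m[detected]

    masses = fake_cluster()
    print(len(masses), "detected stars; mass range",
          masses.min(), "to", masses.max())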

2011-07-07

the IMF, spectrographs

Among many other things today, the currently-in-Heidelberg part of the PHAT team (namely Dalcanton, Weisz, Lang, Rix, and Hogg) discussed the first baby steps towards using the young stellar clusters imaged in multiple bands in M31 to measure the stellar initial mass function and its dependence on cluster mass and clustercentric radius. We discussed all the limitations of working at catalog level and also the benefits, and decided to start out with that, with all approaches that are more precise and accurate postponed until we understand all the imaging details better. Lang and I were assigned the task of looking at the information content of the catalogs with respect to radial and mass dependences; Weisz is working on the machinery used for fitting.

Late in the day, Bolton and I discussed spectrograph modeling. My position is that we might be able to vastly improve the accuracy of spectroscopic reductions and high-level science results if we build hierarchical models of the spectrographs, using all the available science and calibration data to infer the parameters. In this vision, the analysis of every exposure would benefit as much as it could from the existence of all the other exposures. We decided that it is too early to embark on this idea; indeed, Bolton was perhaps suspicious that it was even a good idea in the first place.

2011-07-06

planet-sized eclipsing companions

Today Schiminovich showed up for a day in Heidelberg; he, Lang, and I spent some time discussing the scope and timing of our paper on transiting white dwarf stars in the GALEX time stream. We made plans for the figures (they must be readable after output by a black-and-white printer) and for the zeroth draft (it must be ready by July 20). We also worked on the title and abstract. We tentatively decided to put the word planet in the title, because that is the long-term goal of all this.

2011-07-05

classification, ubercalibration

In the morning I spoke with the PanSTARRS crew meeting at Heidelberg about hierarchical methods for star–galaxy separation. I showed the demo I made yesterday, and results obtained with Fadely and Willman on real data. In the afternoon, I spoke with Rory Holmes about our ubercalibration (self-calibration) theory paper, which he is writing up as a paper and as a PhD thesis chapter.

2011-07-04

PanSTARRS

I sat in on an all-day meeting in Heidelberg of the PanSTARRS Milky Way and Local Group structure working group. I learned a great deal, including that PanSTARRS is operating well and has tons of data. I spoke brashly about detecting sources at low signal-to-noise in multi-epoch imaging, and about star–galaxy separation, and I got assigned homework on both subjects. On the latter, I agreed to speak tomorrow, and I stayed up late preparing a quantitative toy demo to illustrate the principles behind hierarchical modeling.

2011-07-02

Poisson distribution

Upon arrival in Heidelberg today, Rix gave me homework regarding inferring the stellar mass function in a cluster given a sample of measured masses. This is a Poisson sampling problem, and there are two somewhat different formulations: In the first, you consider the independent probability for each of the observed masses (given a model of the distribution), and then multiply the product of those by a Poisson probability for the total observed number. In the second, you consider infinitesimal bins in mass, and multiply together the Poisson probabilities for the (binary) occupation of each of the bins. Late in the day I wrote a document showing that these two approaches are mathematically identical, at least as far as the likelihood function is concerned.
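
Schematically (in the notation I will use here: Lambda(m) is the expected number of stars per unit mass, Lambda is its integral, the expected total number, and N masses m_i are observed), the equivalence is:

    \Lambda \equiv \int \Lambda(m)\,dm

    \mathcal{L}_1 = \frac{e^{-\Lambda}\,\Lambda^{N}}{N!}\,
                    \prod_{i=1}^{N}\frac{\Lambda(m_i)}{\Lambda}
                  = \frac{e^{-\Lambda}}{N!}\,\prod_{i=1}^{N}\Lambda(m_i)

    \mathcal{L}_2 = \Bigl[\prod_{\mathrm{all\ bins}} e^{-\Lambda(m)\,dm}\Bigr]\,
                    \prod_{i=1}^{N}\Lambda(m_i)\,dm
                  = e^{-\Lambda}\,\prod_{i=1}^{N}\Lambda(m_i)\,dm

Here L_1 is the first formulation (Poisson in the total number times independent draws from the normalized distribution), and L_2 is the binned formulation, in which each occupied infinitesimal bin contributes Lambda(m_i) dm and the product of all the bins' exponential factors is exp(-Lambda). The two differ only by the N! and the dm^N, neither of which depends on the model parameters, so as likelihood functions they are identical.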

These models are for independently drawn (iid) mass samples. We would like to break the iid assumption, but non-independent samples are more complicated. There aren't good general ways to think about dependent data, and yet there is a vociferous literature about whether stellar masses in clusters are iid. The literature is loud but doesn't contain anything that I consider a "model", which for me is a quantitative specification of the likelihood (the probability of the data given model parameters).