2012-07-31

a causal model for Milky Way disk stars

Bovy, Rix, Foreman-Mackey and I gathered in the "vdB lounge" of the MPIA to discuss a graphical model for stars in the Milky Way disk, in part to guide our thinking about projects and in part to make a possible illustration for a review that Bovy and Rix are writing. There was some argument about where to put the extinction, and what to consider the observables, with Bovy and Rix making the case that since they are given (by observers) temperature and gravity, these should be seen as observables, and me arguing that since these are coming from medium-resolution spectra, the spectra should be seen as the observables. This difference doesn't matter much to the inference, but part of my prejudice comes from the idea that graphical models can be thought of as causal models. Therefore I want my graphical model to represent the causal structure of the problem where possible.
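
To make the disagreement concrete, here is a toy encoding of the causal structure I have in mind; the node names are just my shorthand, not anything we agreed on at the whiteboard. Each entry maps a quantity to the parents that causally set it, with the spectrum at the bottom as the observable.

```python
# Toy causal structure for a disk star; names are my own shorthand.
# Each key maps a quantity to the parents that causally set it.
causal_parents = {
    "age, metallicity, mass": ["disk star-formation and chemical-evolution history"],
    "temperature, gravity":   ["age, metallicity, mass"],
    "true spectrum":          ["temperature, gravity"],
    "extinction":             ["dust along the line of sight"],
    "observed spectrum":      ["true spectrum", "extinction", "noise"],  # the observable
}

# In the Bovy--Rix version, "temperature, gravity" would be the (noisily)
# observed leaf nodes and the spectrum would not appear in the graph at all.
for child, parents in causal_parents.items():
    print(" + ".join(parents), "->", child)
```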

Along similar lines I discussed with Beth Biller (MPIA) her projects to do exoplanet direct imaging campaign simulations based on data-driven models of the exoplanet population. Here one of the interesting things is that non-detections can be just as constraining as detections, and she has a nice framework for making all that happen. This is strongly related to exoplanet population modeling under consideration by Foreman-Mackey and myself.
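
Schematically, the way non-detections enter is something like the toy below; this is my own sketch, not Biller's framework, and every number in it is invented.

```python
import numpy as np

# For one star: if planets occur at rate `rate_per_bin` over a grid of
# (mass, separation) bins and the campaign would detect a planet in bin k
# with probability p_det[k], the expected number of *detected* planets is a
# sum over bins, and a non-detection carries the Poisson probability
# exp(-expected).  All numbers here are invented.
rng = np.random.default_rng(0)
rate_per_bin = np.full(100, 0.02)      # hypothetical occurrence rates per bin
p_det = rng.uniform(0.0, 0.8, 100)     # hypothetical per-star completeness map

expected_detections = np.sum(rate_per_bin * p_det)
p_no_detection = np.exp(-expected_detections)
print(expected_detections, p_no_detection)
```

Multiplying the non-detection probabilities over all the surveyed stars is what makes the non-detections constrain the population just as the detections do.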

2012-07-30

modeling multi-resolution data sets

I worked today on our Herschel side project, in which Lang and I attempt to model the dust emission in the M31 disk at the resolution of the Herschel 100-micron images, but using information from all the bands out to 500 microns, which are much lower in resolution. It all seems to work! But we need ways to test the results. Ideas that have come up include: Check that we reproduce the extinction map. Check that we predict the 70-micron and 1-mm data well. Check that we don't find strong A_V vs temperature variations. Any other ideas? We get good residuals internally to Herschel, but obviously external tests are much more valuable.
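
Schematically, the forward model is something like the sketch below; Gaussians stand in for the real Herschel beams and the SED factors are made up, so this illustrates the structure, not our actual code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# One high-resolution dust map is fit to all bands at once; each band only
# sees that map through its own (broader) beam and its own SED factor.
rng = np.random.default_rng(1)
npix = 128
dust = rng.random((npix, npix))                          # high-res parameters
beam_sigma = {"100um": 1.0, "250um": 2.5, "500um": 5.0}  # made-up beam widths (pixels)
sed = {"100um": 1.0, "250um": 0.7, "500um": 0.3}         # made-up SED factors

def predict(dust_map, band):
    # predicted image in one band: SED-scaled dust map, smoothed to that band's beam
    return gaussian_filter(sed[band] * dust_map, sigma=beam_sigma[band])

# chi-squared against (fake) data images, summed over bands
data = {b: predict(dust, b) + 0.01 * rng.standard_normal((npix, npix)) for b in beam_sigma}
chi2 = sum(np.sum((predict(dust, b) - data[b]) ** 2) / 0.01 ** 2 for b in beam_sigma)
print(chi2)
```

The point is that a single high-resolution parameter map is responsible for every band at once: the long-wavelength bands inform the fit through their own fat beams without ever dragging the answer down to their resolution.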

2012-07-27

seminar

I gave my Hauskolloquium at MPIA today. I think I ended up convincing everyone to just coadd their damned data, even though my intention was just the opposite!

2012-07-26

non-negative image priors

Foreman-Mackey and I sprinted on The Thresher in the hopes of giving me something to say in my Hauskolloquium here at MPIA tomorrow. We figured out something (already known) about non-negative priors: They can lead to severe biases. Consider, for example, PSF fitting with non-negative priors. If there is any noise in the system at the outskirts of the data used to find or fit the PSF, a system with non-negative priors can only capture the positive excursions; it can't also capture the negative excursions. This leads to two kinds of biases: One is that the PSF inferred with non-negative priors is always fatter than the PSF you would obtain by normal linear least squares. The other is that the PSF never really goes exactly to zero in the wings; it just can't. These problems arise, in my view, because we are doing point estimation; if we were carrying forward a full posterior PDF for the PSF we would be fine, but we just don't yet know how to do that! We came up with short-term hacks to deal with our problems and get me ready for my talk tomorrow, which is about extracting information from collections of images.
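
Here is the bias in its most stripped-down form; this is a toy, not The Thresher itself. In the far wings the true PSF value is zero, so noisy estimates scatter around zero, and any estimator forced to be non-negative has a positive mean there.

```python
import numpy as np

# Average of many non-negative point estimates of a wing pixel whose true
# value is zero: the clipping keeps the positive noise excursions and throws
# away the negative ones, so the estimator is biased high.
rng = np.random.default_rng(42)
noisy_estimates = 0.0 + 0.1 * rng.standard_normal(100000)   # true value 0, sigma = 0.1

least_squares_estimate = noisy_estimates.mean()                      # unbiased, ~0.0
non_negative_estimate = np.clip(noisy_estimates, 0.0, None).mean()   # biased, ~0.04
print(least_squares_estimate, non_negative_estimate)
```

The clipped estimate comes out near 0.04 rather than zero, which is exactly the fat, never-quite-zero wing behavior described above.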

2012-07-25

PCA, hst pixels

Long discussions today between Sandstrom (MPIA), Groves (MPIA), Kapala (MPIA), Weisz (UW) and me about inferring the total mean intensity of a patch of sky from the pixel photon counts in a large patch of HST imaging. I think the problem is hard enough—given zodiacal, Galactic, and extragalactic backgrounds (well the first two I guess are foregrounds)—without the additional problem that we don't understand the HST pipeline processing. I got even more upset about the idea (encoded in DrizzlePac) that images are grids of buckets of photons; this view leads to ideas like "transformations that preserve flux" (when in fact transformations should preserve intensity), and makes all interpretation extremely sensitive to pixel solid angles, which may or may not be properly multiplied in (with an intensity image, failure to multiply in would be failure to calibrate; not so when you are just counting photons).
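
The distinction is trivial to state but easy to lose inside a pipeline. A toy illustration (this is not what DrizzlePac computes, just the point I am making): a perfectly flat sky has constant intensity, but the counts per pixel scale with each pixel's solid angle, so any operation that treats the counts image as the thing to conserve inherits the solid-angle pattern.

```python
import numpy as np

# Flat sky: constant intensity, but counts vary with pixel solid angle.
# Dividing by the solid angles (calibrating to intensity) gives the image
# that can be safely resampled and interpreted.
intensity = 1.0                                    # flat sky, arbitrary units
solid_angles = np.array([0.90, 1.00, 1.10, 1.05])  # hypothetical pixel solid angles
counts = intensity * solid_angles                  # what the detector records

print(counts)                 # varies pixel-to-pixel even though the sky is flat
print(counts / solid_angles)  # flat again: this is the intensity image
```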

Collaborator Tsalmantza was shocked to hear me talking about PCA with the exoplanet direct imaging group (headed by Brandner) since I have devoted quite a bit of the last few years to attacking PCA. I guess I now have to agree that PCA is sometimes useful. It is never Mr Right but it is sometimes Mr Right Now. Everyone in the direct imaging world is talking PCA right now, I think in part because we have all reached the limit of what means can teach us. In Brandner's group meeting we also talked about finding young M stars, which are great exoplanet search targets because they are not so luminous and when young the planets might still be hot. Young M stars have activity that gives them line, UV, and X-ray emission; the planets might too; that seems like an interesting thing to think about!
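
To be concrete about the PCA point: the trick everyone in direct imaging is using amounts to something like the sketch below, in my own minimal form (not the Brandner group's pipeline).

```python
import numpy as np

# PCA-style PSF subtraction: build principal components from a stack of
# reference frames, project the science frame onto the first few, and
# subtract; a faint companion should survive in the residual.
rng = np.random.default_rng(1)
references = rng.standard_normal((50, 64 * 64))   # 50 reference frames, flattened
science = rng.standard_normal(64 * 64)            # one science frame, flattened

mean_frame = references.mean(axis=0)
_, _, components = np.linalg.svd(references - mean_frame, full_matrices=False)

k = 5                                             # number of modes to remove
coeffs = components[:k] @ (science - mean_frame)
psf_model = mean_frame + coeffs @ components[:k]
residual = science - psf_model
print(residual.std())
```

The choice of k trades off how much of the stellar speckle pattern is removed against how much of any faint companion gets absorbed into the PSF model.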

2012-07-24

writing, Thresher, PanSTARRS, HST

I didn't get much done today after a nearly-all-nighter helping with the Fergus paper on high-contrast imaging and various papers by Bovy and Bolton. However, I did pair-code a bit of The Thresher with Foreman-Mackey; we showed that we can take a small number of PanSTARRS cutouts with different seeing and make a coadded version that is higher in signal-to-noise and resolution than any of the original images from which it is built. (Phil Marshall made the image below, and also noted that we are "over-deconvolving", which is a problem, I have to agree.) That could be useful to the survey team. Late in the day, Maria Kapala and I beat our heads against the HST data reduction pipelines used in the HST Archive and the PHAT HLSP Archive. There seem to be inconsistencies in the image pixel histograms that are very difficult to explain by any reasonable model of what these pipelines do.



2012-07-23

rigid disk kinematics, habitability

After a morning of beating our heads against intransigent code, Foreman-Mackey and I spoke with Rix and Lisa Kaltenegger (MPIA) about their project to do a more responsible job of assigning probabilities of habitability to noisily observed tiny planets. The issues include: The observational uncertainties can be bad when projected into the quantities of interest (insolation and size). Some aspects of the problem are essentially unobserved (albedo and existence of an atmosphere and rockiness). Some planets have mass and radius observed, some have only one or the other. We discussed in general, and Foreman-Mackey is going to think about whether to take it on.
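
The kind of propagation involved is easy to sketch; every number below is invented and the "habitable" box is a placeholder, not a recommendation.

```python
import numpy as np

# Monte Carlo propagation of noisy observables into insolation and radius,
# then the fraction of samples landing in an adopted box -- rather than
# pushing a single best-fit value through.  All numbers are invented.
rng = np.random.default_rng(7)
n = 100000
L_star = rng.normal(0.05, 0.01, n)     # stellar luminosity [L_sun]
a = rng.normal(0.20, 0.03, n)          # semi-major axis [AU]
r_planet = rng.normal(1.4, 0.4, n)     # planet radius [R_earth]

insolation = L_star / a**2             # in units of Earth's insolation
in_box = (insolation > 0.3) & (insolation < 1.8) & (r_planet < 2.0)
print(in_box.mean())                   # a toy probability, not a habitability claim
```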

At lunch we discussed Bovy's excellent (not yet released) manuscript on the Milky Way disk using SDSS-III APOGEE, which is based on measurements of radial velocities all over the whole disk (APOGEE is an infrared spectroscopic survey, so it sees through a lot of the dust). Even marginalizing out distances to the stars, he can obtain a few-percent measurement of the rotation curve! One fundamental and obvious yet surprising point that came up is that in radial velocity the rotation of a perfectly cold, rigid disk is unobservable! Several of us laughed to realize we had never thought of that previously. Bovy's method therefore relies heavily on the asymmetric drift model—the fact that the lag of a population relative to the cold-population rotation curve is a function of velocity dispersion. But since he can see most of the disk, and since he is marginalizing out troublesome distance estimates (and also giant/dwarf classification), his results are the best ever. We discussed the comparison with future maser studies but I think that the much more precise maser measurements might not help that much: The masers and other young stars have a lag that is not easily understood in terms of the theory of asymmetric drift, as we showed here and here. Rix pointed out that it shouldn't be easily understood, because the masers aren't expected to be an angle-mixed population.
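
To be explicit about the rigid-disk point, it takes a few lines to verify numerically (a toy in two dimensions):

```python
import numpy as np

# Give every star -- and the Sun -- the same solid-body rotation v = Omega x r,
# then compute line-of-sight velocities relative to the Sun.
rng = np.random.default_rng(0)
omega = 0.03                                      # angular speed, arbitrary units
stars = rng.uniform(-10.0, 10.0, size=(1000, 2))  # positions in the disk plane
sun = np.array([8.0, 0.0])

def solid_body_velocity(xy):
    # v = Omega * (z-hat cross r), written out in 2-D: (-Omega y, Omega x)
    return omega * np.stack([-xy[..., 1], xy[..., 0]], axis=-1)

dv = solid_body_velocity(stars) - solid_body_velocity(sun)
dr = stars - sun
v_los = np.sum(dv * dr, axis=1) / np.linalg.norm(dr, axis=1)
print(np.abs(v_los).max())   # zero to machine precision
```

The relative velocity is Omega z-hat cross (r_star minus r_sun), which is perpendicular to the separation vector and hence to the line of sight, so a co-rotating observer sees no radial-velocity signal anywhere; only departures from rigid rotation (such as asymmetric drift) show up.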

2012-07-20

PHAT Camp, day five

I worked on wording, equations, and pseudo-code for completeness inclusion in the various PHAT projects underway. Gordon and Weisz successfully ran the PDMF fitting code from end-to-end on both simulated and real clusters. Also Maria Kapala (MPIA) came by and we looked at the pixel histogram in various PHAT UV HST images of the M31 disk. She is looking to see if it is possible to measure the light from the unresolved and undetected stars in the UV as a shift in the pixel histogram in the UV images. The histograms were nearly incomprehensible, making me wonder if MultiDrizzle is messing up the data. Of course I have never been fond of Drizzle-like algorithms, because they treat the data as overlapping square bins of photons and not samples of a pixel-convolved, PSF-convolved intensity field. More on that here next week (for Brewer's sake if no-one else's). We may try to chase down the reasons for the pixel histogram issues next week.

2012-07-19

PHAT Camp, day four; optical interferometry

Foreman-Mackey and I worked out the differences between, as it were, the Bovy view of the world and the Foreman-Mackey view of the world (well, the world of variable-rate Poisson processes subject to censoring). Of course the two views are mathematically identical in the end!

As work continued in PHAT land, I escaped for an hour to see an outrageous talk by Andy Skemer (Arizona) on high-resolution imaging with the LBT. He showed an absolutely beautiful interferometric image at four microns of a star made with the LMIRcam interferometer. It looked like what we see in textbooks! He also showed a video of the image over time, showing that the phase difference between the telescope's two mirrors varies stochastically with time, probably because of atmospheric variations. Foreman-Mackey and I discussed whether we could model these data—the individual-telescope AO PSF is so stable, the different images have only one degree of freedom, which is the unknown phase difference between mirrors. That's a one-parameter family of PSFs and 0.04 arcsec resolution in the good direction.
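
The one-parameter family we have in mind is, very roughly, the following; this is a toy one-dimensional cut with made-up numbers, not LMIRcam's real optics.

```python
import numpy as np

# Two-aperture Fizeau-style PSF, schematically: a single-mirror envelope
# times a fringe pattern; the unknown piston phase `phi` just slides the
# fringes under the envelope.  Model every frame with the same envelope
# and one phi per frame.
x = np.linspace(-0.5, 0.5, 1001)   # arcsec; 1-D cut through the PSF

def psf(phi, fringe_spacing=0.04, envelope_fwhm=0.1):
    envelope = np.exp(-4.0 * np.log(2.0) * (x / envelope_fwhm) ** 2)  # Gaussian stand-in
    fringes = 1.0 + np.cos(2.0 * np.pi * x / fringe_spacing + phi)
    return envelope * fringes

rng = np.random.default_rng(2)
frames = [psf(phi) for phi in rng.uniform(0.0, 2.0 * np.pi, 10)]   # fake "video"
```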

2012-07-18

PHAT Camp, day three

I worked a bit more on the math behind variable-rate Poisson processes with censoring; the final tidying-up of the work we have done over the last few days. In the meantime, Weisz and Gordon worked on getting code to infer the present-day mass function in various M31 stellar clusters. So all goes according to plan. In the afternoon, Christy Tremonti entertained Hennawi's group meeting with some very strange, very compact, very energetic post-starburst galaxies. They seem to have enormous ultraviolet fluxes but no emission lines. How is that possible?

2012-07-17

PHAT Camp, day two; exoplanet pops

Late in the day I attended a short talk by Mordasini (MPIA) at the currently ongoing Heidelberg Exoplanet meeting about exoplanet population synthesis. Mordasini is trying to go from initial conditions inspired by protoplanetary disk observations through to late-time planets in composition, size, mass, orbital semi-major axis, and so on. It is early days, but his talk suggested that a jointly theory- and data-driven model could be very effective at explaining and predicting data.

In the rest of the day, the PHAT team plus Foreman-Mackey and Bovy argued about completeness. We finally resolved our issues by realizing that (from a graphical model perspective) the data we have are conditioned on being included in the sample and this is how completeness is properly included probabilistically. This is equivalent to a distribution-function or density-modeling perspective, in which the observed data are drawn from an error-convolved, completeness-censored distribution. It is all obvious in retrospect, but there are lots of wrong ways to include the completeness function!
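
In equations, the resolution is the standard censored, variable-rate Poisson likelihood; this is my notation, not necessarily how the PHAT code will write it.

```latex
% lambda(x; theta): true rate of stars at x (mass, magnitudes, ...)
% c(x): completeness, the probability that a star at x makes it into the sample
\ln p(\{x_i\} \mid \theta)
  = \sum_i \ln\!\big[ c(x_i)\,\lambda(x_i;\theta) \big]
  \;-\; \int c(x)\,\lambda(x;\theta)\,\mathrm{d}x
```

Equivalently, the observed points are draws from the normalized density c lambda / (integral of c lambda), which is the distribution-function view. Note that 1/c never appears anywhere, which is why inverse-completeness weighting of the data points is not what probability theory delivers.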

2012-07-16

PHAT Camp, day one

A week-long sprint on the Panchromatic HST Andromeda Treasury project started today, with Dan Weisz, Morgan Fouesneau, Cliff Johnson, Karl Gordon, and of course PI Dalcanton in Heidelberg, along with semi-locals me, Rix, and Foreman-Mackey. (Actually, you have to count Dalcanton as a quasi-local too.) We started by laying down the law: No-one leaves Heidelberg without the figures for a paper on the present-day mass function in young stellar clusters. We then proceeded to work through the status and conceptual issues, with reviews of work by Gordon on likelihood functions (think converting photometry to mass estimates) and Weisz on hierarchical methods for getting the PDMF out of noisy and incomplete data. A fight started to break out about completeness functions; we are going to have to resolve this tomorrow. The key thing that we at PHAT Camp will not do is upweight incomplete data by the inverse of the completeness; that absurdity weights arbitrarily highly the least likely data points! But we still don't agree on how the completeness is best included in the fits. This should be a matter of probability theory—not taste—but we are still a bit confused (myself included). We all started writing code for the various parts of the project.

In a side discussion, Dalcanton, Rix, Johnson, and I discussed evolutionary models for fading, dissolving, evaporating, and expanding stellar clusters, bound and unbound. Johnson began the discussion with a very nice review of the current literature and the controversies therein. We came up with a continuity equation formalism and assigned Johnson the task of making a toy model. My student Jiang is doing similar things with galaxy evolution back in NYC.
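
The continuity-equation idea, in the most naive form I have in mind (my notation; Johnson's toy model may end up looking quite different):

```latex
% n(m, t): number of clusters per unit mass
% \dot{m}(m, t): mean mass-change rate (negative for evaporating clusters)
% \Phi(m, t): cluster formation rate; \tau(m, t): destruction timescale
\frac{\partial n}{\partial t}
  + \frac{\partial}{\partial m}\left[\dot{m}(m,t)\,n(m,t)\right]
  = \Phi(m,t) - \frac{n(m,t)}{\tau(m,t)}
```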

2012-07-13

big models

What a day! There were talks of various lengths by Valluri (Michigan), Thalmann (Amsterdam), Morganson, Bañados, Deacon, and Goldman (all MPIA). So much to report that I will report nothing except that PanSTARRS calibration is under control; it is only data access that needs work, and high-dynamic range imaging efforts at MPIA could benefit from the stuff Fergus and I are working on!

At lunch Finkbeiner (Harvard) showed up, and we discussed how to model the dust in the Milky Way in two (projected) and three dimensions. I showed him some technology to deal with the more-parameters-than-data non-problem, including regularizations that really only act when the data are failing to inform. I stuck with frequentist approaches, because we still don't understand how to do exceedingly enormous non-parametric (think: infinitely parameterized) problems with full posterior output. There is a huge literature on this, so maybe one day we will have a breakthrough. I also stuck with methods that are fast, because if you want to build models with billions of parameters (and we do), you care.
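
By "regularizations that really only act when the data are failing to inform" I mean something like the sketch below; the sizes are toys, and the real maps have vastly more parameters.

```python
import numpy as np

# Ridge-like damping on a linear model with more parameters than data:
# the eps term is negligible wherever A^T C^-1 A constrains a parameter,
# but keeps the unconstrained directions from blowing up.
rng = np.random.default_rng(3)
n_data, n_par = 50, 200
A = rng.standard_normal((n_data, n_par))   # design matrix (data given parameters)
b = rng.standard_normal(n_data)            # the data
Cinv = np.eye(n_data)                      # inverse noise covariance
eps = 1e-3                                 # small: acts only where the data do not

x = np.linalg.solve(A.T @ Cinv @ A + eps * np.eye(n_par), A.T @ Cinv @ b)
print(x.shape)
```

At billions of parameters you never build or factorize that matrix explicitly; you solve the linear system iteratively, which is one reason speed dominates the design choices.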

2012-07-12

discrete models of AGN

Elisabeta Lusso (MPIA), Joe Hennawi (MPIA), and X. Prochaska (UCSC) discussed physical models of AGN with me at lunch today. They are modeling each AGN as a linear combination of warm/cold dust emission, a dusty torus, a stellar population, and an accretion disk, attenuated by two extinctions, one for the stars and one for the AGN. For each of the four physical components they have a discrete set of possible models (a set of "archetypes" in my lingo). So the model parameters are four amplitudes, two extinction amplitudes, and four integers. But if they want to marginalize out most of this—and they do—they need to put priors over those integers. That is, they need four huge lists of probabilities, each of which sums to unity. I noted to them that we have done this very thing in our star–galaxy paper and told them I would write down some math in the next week or two. They want to run on every known AGN, using every extant data set, so that's my kind of problem!
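
The marginalization over the integers is, schematically, just a weighted sum; the sketch below is mine, with a placeholder likelihood, not their fitting code.

```python
import numpy as np

# Four lists of archetypes, four lists of prior probabilities (each summing
# to one), and a marginal likelihood that is a prior-weighted sum over all
# combinations.  The likelihood here is a placeholder.
rng = np.random.default_rng(5)
n = {"dust": 20, "torus": 15, "stars": 30, "disk": 10}      # archetype list sizes
priors = {name: np.full(size, 1.0 / size) for name, size in n.items()}  # flat to start

def likelihood(i_dust, i_torus, i_stars, i_disk):
    # placeholder: in reality, fit the four amplitudes and two extinctions
    # for this combination and return the profiled or marginal likelihood
    return np.exp(-0.5 * rng.uniform(0.0, 10.0))

marginal = sum(
    priors["dust"][i] * priors["torus"][j] * priors["stars"][k] * priors["disk"][l]
    * likelihood(i, j, k, l)
    for i in range(n["dust"]) for j in range(n["torus"])
    for k in range(n["stars"]) for l in range(n["disk"])
)
print(marginal)
```

The four probability lists can themselves be treated as hyperparameters and inferred from the whole sample of AGN, which is (I think) the sense in which this matches what we did with archetypes in the star–galaxy paper.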

2012-07-11

dotastronomy, day three

Today Pepe and Goodman (both Harvard) gave very nice talks about the universe of seamless-astronomy things they work on, including the ADS All-Sky Survey (on which I am a collaborator). Both of them did some nice advertising of Astrometry.net too, which pleased me. In the afternoon unconference session, a group of us got the idea of starting an astronomical data-sciences consulting company, which would provide vertically integrated services for people with large data problems, and also pay salaries to my people (read: dotastronomy participants and the like-minded).

2012-07-10

dotastronomy, day two

Today was Hack Day at dotastronomy. The hack day started with a session in which various people proposed hacks, in part to advertise and in part to entice people in the audience with coding (or other) skills to participate. I proposed making a bot to interact with the users on the Planet Hunters forums, helping them to discuss and analyze the variable stars and transits they are finding. Foreman-Mackey proposed writing a javascript numerical optimization library for inference in the browser. Marshall proposed making quantitative measurements of images (that is, modeling) in the browser. These three hacks are related, of course, because we are thinking about very capable bots!

When we got started, I found that many people were interested in the bot concept, especially Price-Whelan, Beaumont, Lintott, and Schwamb. They got started exercising the brand-new Planet Hunters API (started by Lintott and Simpson), with help from people at Adler (in real time), sending and receiving JSON. I very quietly backed away from the table, and they executed my hack with absolutely no involvement from me whatsoever [added later: I got a prize in the hack prizes for this life hack: Getting others to hack on my behalf!]. While they made a bot, I went to help out with (read: gaze in awe at) Foreman-Mackey's hack.

Great success all around: Marshall and Kapadia got image fitting working in the browser (and now Kapadia is contemplating porting simplexy to javascript). Foreman-Mackey made this demo, which does all its fitting in the browser; click through to the code if you need a javascript numerical optimizer. The bot, called ZooniBot, started commenting on the forums and by the end of the day had three logos (one drawn, unsolicited, by the child of a Planet Hunters user) and a bunch of online followers and direct messages!

I love Hack Day.

2012-07-09

dotastronomy, day one

There were many good contributions in both the conference and the unconference parts on the first day of dotastronomy. One standout was Pössel's demo of the planetarium show in the dome of the Haus der Astronomie, where the meeting is being held. The projectors are digital and yet beautiful, and exceedingly high in dynamic range (much higher than any digital projector I know). Other standouts were an unconference session on javascript, during which we argued about case studies, and a session on visualization, where we group-critiqued specific user-brought figures. The latter should be a part of every conference: a session in which the group gives feedback on particular figures and visualizations is a science betterizer. Robitaille gave a nice talk about the astropy project (although he doesn't yet have a good answer to the question of how we should ensure that the contributors get citations for their contributions; that's a great problem to solve). Muench talked about ways to make data used in papers discoverable by others in the open-science Comedy of the Commons towards which we (all?) are working.

2012-07-06

lensing success

By mid-day, Marshall and I succeeded in synthesizing a PanSTARRS image of a gravitational lens, using the Tractor. That constituted success. Then I took off for the Austrian border (for the hiking, not the anonymous banking).

2012-07-05

lensing and image synthesis

After Marshall gave an MPIA Galaxy Coffee talk about his work on finding Milky-Way-like halos from the output of n-body simulations—a method of using LambdaCDM as a prior on halo properties—we got to work making the gravitational lensing version of the Tractor. We struggled with Python and inheritance, but we got very close. We are trying to synthesize PanSTARRS images before the week is out. On the side, Price-Whelan worked to refactor and resurrect his work on our paper tentatively titled Dear PanSTARRS: Don't co-add your data. Love, Astrometry.net.

2012-07-04

not the h-word!

In the morning, Sarah Ballard (Harvard) gave a nice talk about finding exoplanets with Kepler plus follow-up. She effectively noted that the candidates most in need of follow-up are Jupiter-mass and Earth-mass, the former because of confusion from blended eclipsing binaries (which look very similar in time history) and the latter because of confusion from more massive planets, which (either from blending or just bad luck) at low signal-to-noise can look like Earth-mass planets. She said that there are very few false positives among the Kepler KOIs at other masses, which is what I have been hearing from other sources (think Johnson and Kjeldsen, recently). She showed very nice models of some transit-timing variations that can be explained by multiple planets in resonance. She also said a lot about habitability while only barely mentioning that word, which was appropriate, given that what is called by the h-word is often just about mass, radius, and mean intercepted radiation from the parent star.

In the rest of the day I discussed probabilistic photometric distances with Rix and Foreman-Mackey, discussed the intergalactic medium and its detectability in different regimes with Hennawi's group, discussed Herschel data with Lang, and watched Foreman-Mackey have an epiphany about the relationships between hierarchical inference, graphical models, and the expectation-maximization algorithm.

2012-07-03

non-parametric Tractor

Lang and I pair-coded (him in Princeton, me in an office in Heidelberg with Foreman-Mackey, Marshall, and Price-Whelan forced to listen in) a version of the Tractor to model Herschel maps of warm dust in M31. Our model is a thermal emission model on a grid of point sources, smoothed with the (very variable with wavelength) point-spread function. We started by trying to do this by just making a huge grid of Tractor point sources with fixed positions and leaving everything up to the Tractor's excellent optimizer, but no way. So in the evening, Lang made a new Tractor model which is a non-parametric (think: lots of parameters) thermally emitting dust blanket. It worked! Now we just have to figure out how to make it fast enough that we can run it on large amounts of sky in practice. The idea of all of this is to get a Herschel-inferred dust map in M31 at the resolution of Herschel's best data, not smoothed to the resolution of Herschel's worst data.
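
The per-cell SED in that dust blanket is, roughly, a modified blackbody; here is the kind of parameterization I have in mind, not necessarily the one Lang coded up.

```python
import numpy as np

# Each cell of the grid carries an amplitude and a temperature and emits a
# modified blackbody, amplitude * nu^beta * B_nu(T); a band image is this
# map evaluated at the band frequency and then smoothed with that band's
# (strongly wavelength-dependent) beam.
h, k_B, c = 6.626e-34, 1.381e-23, 2.998e8   # SI constants

def modified_blackbody(amplitude, T, wavelength_m, beta=2.0):
    nu = c / wavelength_m
    b_nu = 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))
    return amplitude * nu**beta * b_nu

print(modified_blackbody(1.0, 20.0, 100e-6), modified_blackbody(1.0, 20.0, 500e-6))
```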

At the other desks in the Camp Hogg office, Price-Whelan worked on object detection, photometry, and astrometry in multi-epoch data with variable conditions, Marshall worked on modeling lenses in PanSTARRS, and Foreman-Mackey worked on making The Thresher run on full-frame images, with and without drifting centroids, and on PanSTARRS data (to give something to our hosts here at MPIA).

2012-07-02

PanSTARRS

On the first work day in Heidelberg, Marshall and I worked on comparing SExtractor and PanSTARRS catalog outputs on PanSTARRS data, with the hope of cutting down the list of candidates for our automated lens finding project. Foreman-Mackey worked on getting the same PanSTARRS data to run through The Thresher; it isn't in the lucky-imaging (think tens of thousands of images) regime, but we think The Thresher still ought to co-add the data better than the survey pipeline. That makes me realize we should run it on SDSS Stripe 82 too.