2014-01-31

philosophy and physics

In a low-research day I had the great pleasure of participating in the "Colloquium" (like a research defense) for NYU Gallatin undergraduate Physics Major Daniel Seara, who did his research on philosophy and physics. The conversation was great; Seara understood some pretty damned difficult books, like Kant's Prolegomena and Hume's Treatise, neither of which I ever really grasped. For me the most interesting points included, for one, a discussion of the relationship of subjectivity to "flaws". Is human science subjective because humans are flawed? Or would it be subjective even if they were flawless reasoners? I think the subjectivity is pretty deep. For another, we discussed a bit why materialism (the belief in an external reality) was important to Marx and Lenin. Strange! Thanks and congratulations to Seara!

2014-01-30

rebuilding a model

Foreman-Mackey is back from a trip to Oxford and we met with Suraj Swaminathan (NYU Data Science) and Fadely to discuss the possibility that Swaminathan could implement the plans we have for a pixel-level data-driven model for the Kepler imaging. We debated the question of fixing or rebuilding (from scratch) Foreman-Mackey's code (written to make the demo in our whitepaper). Since the model is linear in everything, we decided that rebuilding from scratch would be straightforward and probably more educationally valuable.
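Because the model is linear, the core of any rebuild is just weighted linear least squares. Here is a minimal sketch of that core (purely illustrative, not the actual Kepler code; the design matrix A, the target pixel time series y, and the inverse variances ivar are placeholders):

```python
import numpy as np

def fit_linear_model(A, y, ivar):
    # weighted linear least squares: minimize (y - A x)^T C^{-1} (y - A x),
    # where C^{-1} is diagonal with the per-datum inverse variances ivar
    ATCinv = A.T * ivar  # scale each data point by its inverse variance
    x = np.linalg.solve(np.dot(ATCinv, A), np.dot(ATCinv, y))
    return x, np.dot(A, x)  # best-fit coefficients and the model prediction
```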

2014-01-29

sampling in galaxy shape space

Lang and I pair-coded (although really Lang coded and I stared at the terminal) two extensions for The Tractor today, both related to our hierarchical weak-lensing project (now called MBI) with Bard and Marshall and Schneider and Dawson and others. In the first, we extended our mixture-of-Gaussians approximations to the standard galaxy profiles to general Sérsic indices; this requires some optimizations, which are running now in the background. In the second, we built something (that Lang wants to call "Trogdor the Burninator") that very reliably initializes emcee for industrial-scale sampling in galaxy morphological parameters for millions of galaxies. We start with an optimization. A third issue, which we did not address, is that we have four qualitatively different galaxy models (exp, dev, composite exp+dev, and Sérsic); we would like, ideally, to sample in all four, but this is not totally trivial, given that either we have to make reversible jumps or else compute fully marginalized likelihoods. Right now we just do an early model selection (by BIC) and go with the selected model only. That's a hack, to be sure.
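Here is a minimal sketch of the optimize-then-sample initialization and the BIC hack, using emcee and scipy; ln_likelihood, p_guess, and the rest are placeholders, and this is not the actual Tractor (or Trogdor) code:

```python
import numpy as np
import emcee
from scipy.optimize import minimize

def optimize_then_sample(ln_likelihood, p_guess, ndata, nwalkers=64, nsteps=1000):
    ndim = len(p_guess)
    # start with an optimization: minimize the negative log-likelihood
    result = minimize(lambda p: -ln_likelihood(p), p_guess)
    # BIC = -2 ln L_max + k ln N (lower is better), for the early model selection
    bic = 2.0 * result.fun + ndim * np.log(ndata)
    # initialize the emcee walkers in a tight Gaussian ball around the optimum
    p0 = result.x + 1.0e-4 * np.random.randn(nwalkers, ndim)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, ln_likelihood)
    sampler.run_mcmc(p0, nsteps)
    return bic, sampler

# the hack: run this once per model (exp, dev, exp+dev, Sersic)
# and keep only the lowest-BIC model for the full sampling
```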

2014-01-28

sick day

Didn't do much, other than argue by email with Bovy about Milky Way tidal-stream morphologies.

2014-01-27

observing and fitting streams of stars

I spent the day up at Columbia, visiting Johnston's group; Geha and Bonaca also came down from Yale. We discussed all things tidal streams, including recent papers by Sanders and by Bovy (streams are the new exoplanets?). We noticed that some things in those papers might already be in one or two old papers by Johnston and by Helmi. Will do due diligence on that soon. One thing I don't like about these recent papers is that they are very tied to the action–angle formalism, which requires integrable potentials and therefore puts incredibly strong, unrealistic, global constraints on what can be fit.

We made a more concrete, more specific outline for Price-Whelan's current paper, which is the first in the business to do (what I consider) a proper treatment of individual-star uncertainties, and which is also not tied to integrability. We also looked at some likelihood issues coming from Bonaca and Küpper. One thing that remains puzzling to me after all the discussion is why some tidal-disruption stream models make something that looks like a "feathered" (or zig-zagging) stream and some don't. This issue might be partially resolved by whether the outputs are shown as a collection of points or as a density distribution, since the latter de-emphasizes the feathering. We started a discussion about what it would take to observe that feathering in full detail; Geha has lots of telescope time!

2014-01-24

importance sampling for weak lensing?

I did my homework for Marshall, writing up six dense pages on my thoughts about probabilistic inference of shear from galaxy observations. It focuses on importance-sampling approaches to weak lensing, à la my eccentricity paper. This new weak-lensing rant is the Nth document like this I have written; I have a whole github repository for unusable documents about weak gravitational lensing. The key thing I figured out is that you don't just need a prior over galaxy shapes (as I blogged the other day), you need a prior over all galaxy properties. Part of the reason is that the likelihood includes covariances between the shape and the other properties. Part of the reason is that our prior also includes such covariances, at least if it is going to represent our actual prior knowledge. We are a long way from having a method, of course (did I use the word "unusable" above?), but if we can show that you can do this with a small number of samples per galaxy, it just might work.
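The basic move, in a minimal sketch (my paraphrase of the idea, not the contents of the write-up; prior_given_shear and interim_prior are placeholder densities over all galaxy properties, not just shape), is to reweight per-galaxy posterior samplings that were generated under some interim prior:

```python
import numpy as np

def ln_likelihood_of_shear(shear, samplings, prior_given_shear, interim_prior):
    # samplings: for each galaxy, a (K, n_properties) array of posterior samples
    # of that galaxy's properties, drawn under the interim prior
    ln_like = 0.0
    for samples in samplings:
        # importance weights: shear-dependent prior over interim prior
        w = prior_given_shear(samples, shear) / interim_prior(samples)
        ln_like += np.log(np.mean(w))  # each galaxy contributes one log-mean-weight
    return ln_like
```

If a small number of samples K per galaxy suffices, the per-shear cost is just millions of cheap reweightings, which is the hope expressed above.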

2014-01-23

likelihood functions

In a low-research day, I discussed noise models with Fadely, and discussed marginalized likelihoods with Marshall. On the former, we are trying to build the HST WFC3 PSF model using a somewhat justified noise model, which is neither unweighted least squares nor the Very Dangerous Move of setting the least-square weights based on the observed (data) intensity (standard practice for lots of optical astronomy, including many of my own SDSS papers). On the latter, Marshall asked me to write up my thoughts and concerns about hierarchical inference via importance sampling in the weak-lensing context. I agreed but didn't get to writing.
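For concreteness, one option in the "somewhat justified" spirit (a sketch of my own, not necessarily the exact model Fadely is implementing) is to set the per-pixel weights from the model prediction rather than from the data, and iterate:

```python
import numpy as np

def model_based_ivar(model_image, read_noise, gain):
    # per-pixel variance from the *model*, not the data:
    # read noise plus a Poisson term proportional to the predicted counts
    var = read_noise ** 2 + np.clip(model_image, 0.0, None) / gain
    return 1.0 / var

# in practice: fit with the current weights, update the model image,
# recompute the weights, and repeat until nothing changes
```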

2014-01-22

weak lensing

I had two telecons today. The first was with Phil Marshall, to discuss my Atlas project, which needs some love and attention. Phil had lots of good ideas for improvements that make it more fun and more useful. In the second I joined a meeting with Marshall's GREAT3 team, which is getting ready to compete in the upcoming weak lensing competition. The team includes Lang, who is operating the farm machinery required. We discussed the importance of having an explicit (and good) prior over galaxy shapes, something that team members Schneider (LLNL) and Dawson (LLNL) are working on at the theory level. We also discussed how to parameterize ellipticity. My position is: If you are working at catalog level (which might be a mistake), you want to work with the point estimates (catalog entries) that are closest to having a Gaussian likelihood! This, if you trace it down, ends up being a statement about ellipticity parameterization. All that said, I expect that all methods working at catalog level are (in the end) doomed to failure. The only things I can see working at catalog level are actually more computationally intensive than working at image (pixel) level.
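For concreteness, here are two of the standard conventions for turning an axis ratio q and position angle phi into a two-component ellipticity (just a sketch; the point above is that the choice of convention changes how close to Gaussian the likelihood in the catalog quantities can be):

```python
import numpy as np

def ellipticity(q, phi, convention="e"):
    # "e" (third flattening):  |e|   = (1 - q) / (1 + q)
    # "chi" (distortion):      |chi| = (1 - q^2) / (1 + q^2)
    if convention == "e":
        amp = (1.0 - q) / (1.0 + q)
    else:
        amp = (1.0 - q ** 2) / (1.0 + q ** 2)
    return amp * np.cos(2.0 * phi), amp * np.sin(2.0 * phi)
```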

2014-01-21

hierarchical inference of exoplanet densities

I spent science time today and yesterday carefully reading a nice new manuscript by Leslie Rogers (Caltech) about the densities of exoplanets as a function of mass. She can show with confidence that super-Earths larger in radius than about 1.6 Earth radii are not rocky; they must have volatiles (water or worse) contributing to their effective (as measured by transits in the optical) radii. The data on these tiny planets (especially the masses) are extremely noisy, so she used hierarchical inference to get the noise-deconvolved density distribution, following the advice in this paper. The SAMSI program Eric Ford organized last summer is definitely getting me some citations for that paper!

In other news, Fadely showed me awesome results from his data-driven model for the HST WFC3 point-spread function. Does anyone have any model (other than TinyTim) against which we can compare?

2014-01-17

writing

I spent my research time reading and writing in Sanderson's nearly finished paper on inferring the Milky Way potential.

2014-01-16

likelihood functions for Milky Way streams

On the way up to Columbia to chat with Johnston, Sanderson and I discussed the probabilistic interpretation of our information-theory approach to Milky Way streams. I argued that we should frame this discussion entirely in terms of likelihood functions, and give up on posterior pdfs. The main reason is that, in my view, "information" is a property of a likelihood function, really. It is the likelihood that moves information from the data to the quantities of interest. Besides, our job as scientists is to provide likelihood functions!
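One concrete version of that statement is the Fisher information, which is defined entirely in terms of the likelihood function (the standard definition, with the expectation taken over data generated by the model at theta):

```latex
I(\theta) \;=\; \mathrm{E}\!\left[\left(\frac{\partial \ln L(\theta)}{\partial \theta}\right)^{\!2}\right]
\;=\; -\,\mathrm{E}\!\left[\frac{\partial^{2} \ln L(\theta)}{\partial \theta^{2}}\right] .
```

No prior appears anywhere in it.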

At Columbia, Price-Whelan showed me cuts through the likelihood function for our generative probabilistic model of stream stars. The MCMC sampling is (not surprisingly) turning out to be hard; one of my go-to moves in this case is to check that the likelihood function is smooth and peaked where it should be (and not elsewhere as well). All tests pass, so I think we are down to improving the initialization and burn-in of emcee.
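A minimal sketch of that sanity check, assuming a ln_likelihood function and a current best-guess parameter vector (both placeholders here):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_likelihood_cuts(ln_likelihood, p_best, widths, npts=101):
    # one-dimensional cuts: vary one parameter at a time around p_best
    for i, (center, width) in enumerate(zip(p_best, widths)):
        grid = np.linspace(center - width, center + width, npts)
        lnls = []
        for value in grid:
            p = np.array(p_best, dtype=float)
            p[i] = value
            lnls.append(ln_likelihood(p))
        plt.figure()
        plt.plot(grid, lnls)
        plt.axvline(center, linestyle="dashed")
        plt.xlabel("parameter {0}".format(i))
        plt.ylabel("ln likelihood")
    plt.show()  # expect smooth curves, peaked near p_best and nowhere else
```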

2014-01-15

Gaussian Processes for quasars

Undergraduate researcher David Mykytyn and I decided to work through the Rasmussen & Williams book on Gaussian Processes, and to complete our exploratory project on fitting time-domain multi-band data on quasars in SDSS Stripe 82. We have made up a crazy covariance function, which accounts for time lags, differences in amplitude, and different time-scales in different bands; we are not sure if we have a kernel that is permitted to exist by the rules of mathematics. (An acceptable kernel is one that can only make positive semi-definite covariance matrices.) One amusing thing we learned is that the industry-leading way to determine whether a kernel is permitted is to test the hell out of it numerically; proofs are much, much harder.
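The brute-force check goes something like this (a sketch, not our actual multi-band quasar kernel): evaluate the kernel on many random sets of inputs and confirm that every resulting covariance matrix is positive semi-definite, up to roundoff.

```python
import numpy as np

def passes_psd_test(kernel, ntrials=100, ndata=200, tol=-1.0e-9):
    # kernel(t1, t2) returns the covariance between epochs t1 and t2,
    # broadcasting over arrays
    for _ in range(ntrials):
        t = np.sort(np.random.uniform(0.0, 1000.0, size=ndata))
        K = kernel(t[:, None], t[None, :])
        if np.min(np.linalg.eigvalsh(K)) < tol:
            return False  # found a counter-example: not an acceptable kernel
    return True           # no counter-example found (which is not a proof!)
```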

2014-01-14

geometric-path Monte Carlo

I spent my research time today working through the final draft of Hou's paper on computing the fully marginalized likelihood (the FML, the Bayes integral, or the evidence integral) using a geometric-path Monte Carlo method we invented-ish with Goodman. The idea is simple: The BIC is an approximation to the FML that replaces the likelihood with a Gaussian approximation. You can build an intermediate approximation to the FML that replaces the likelihood with the geometric mean of a Gaussian and the true likelihood. As you can imagine, there is a whole family of such approximations, going from the pure Gaussian approximation all the way to the true likelihood. We travel along this "geometric path" from the Gaussian to the true likelihood and end up with an estimate of the FML. The method is fast and effective in exoplanet contexts (as we show); it is also unbiased and comes with an uncertainty estimate on the FML value it returns.
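In symbols (my paraphrase, with p(theta) the prior, L(theta) the likelihood, and G(theta) the Gaussian approximation to it), the family of intermediate integrals indexed by beta is

```latex
Z_\beta \;=\; \int p(\theta)\, G(\theta)^{1-\beta}\, L(\theta)^{\beta}\, d\theta ,
\qquad 0 \le \beta \le 1 ,
```

so that beta = 0 gives the Gaussian (BIC-like) integral and beta = 1 gives the true FML. One standard way to exploit such a family (not necessarily the exact estimator in Hou's paper) is to estimate the ratios of successive Z_beta values by Monte Carlo at a sequence of beta steps and chain them together from the analytically tractable (or nearly so) beta = 0 end up to beta = 1.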

2014-01-13

model-free or badly modeled?

Robyn Sanderson (Groningen) is in town to finish our paper (with Helmi) on inferring gravitational potentials using information theory. Our method looks for generic clustering in action-space; it is "model-free" in the sense that it doesn't specify a model; it doesn't even assume the stars are on streams (and it doesn't need to know how many streams, or which stars are in which streams, or whether the streams are really shells, etc). The flip side of this is that it must, therefore, be (implicitly) a bad model.

One way of looking at it is that our method provides an estimator (a point estimate) of the potential. That estimator is either efficient or not. If it is efficient, it is the maximum-likelihood estimator for some model that we have never specified and therefore don't know. That can't be good! Indeed we can see that the estimator is biased, and the bias is bigger than the variance.

However, all that said, the point of this project is to explore the possibility that phase-space is structured, while remaining agnostic about that structure. This is complementary to my usual kinds of approaches, which make explicit assumptions about the causal or generative process behind any structure we are using for inference. The method is (fairly) fast and seems to work, so we are writing.

2014-01-09

AAS 223, day 4; AAS hack day

The day started with me giving a plenary on engineering (principally software engineering related to data analysis and target selection). It was admirably live-blogged by the people, so check twitter if you want a summary. I have put my slides up here.

Immediately after my talk it was the start of the second annual AAS Hack Day. We ran it similarly to last year: Pitch, self-assemble, hack, hack, eat, hack, hack, report. The results were astounding. Here are just a few highlights; this is not the full list of completed hack projects:

mpld3
Vanderplas (UW) has been working on making a javascript-d3 back-end for matplotlib, so that python matplotlib code can be made to create beautiful interactive plots in the browser to be embedded in web pages and other online content. At Hack Day, Vanderplas implemented "tool tips" for matplotlib plots and propagated them into the d3 backend. In a similar parallel hack, Meisner (Harvard) implemented "tool tips" functionality for pixels in the popular IDL image viewer called ATV.
astrotweeps
Schwamb (Taiwan) and a team started up a twitter account (and associated live-updating web page and facebook page) that will follow, for each week of the year, one astronomer, somewhere in the world. Email Schwamb if you want to take it up for a week sometime this year!
dumb-aas
Using statistics of n-grams in the AAS contribution titles, Price-Whelan made a web service that creates new, fake titles. The best was "The metallicity distribution of astronomy education". After Hack Day, Foreman-Mackey made a twitter account that does the same for arXiv papers.
d3po in presentations
Williams (Berkeley) mashed up d3po plotting with javascript-based presentation software to make useful code for putting interactive figures into seminar slides.
startorialist and social media
Rice (CUNY) and Corrales (Columbia) expanded the startorialist web site, making it play well with twitter, and (gasp) tumblr. Apparently twitter is so five minutes ago.
WWT input-image stretches
For the second year in a row, Hack Day led to a feature implementation in Microsoft World-Wide Telescope, by Jonathan Fay (Microsoft). Teuben (Maryland) asked for new kinds of control over brightness and contrast on images viewed in WWT; Fay made him sit down and they put it right in there. Apparently it will ship in the next version!
gender in question period
Before the AAS meeting even started, Davenport (UW) set up some systems by which meeting attendees could report back to him data about who is speaking and who is asking questions at AAS meeting sessions. He gathered together a data-analysis team who did a great job from beginning to end. It looks like they found something: Although the AAS speakers reflect roughly the same 60–40 male-to-female ratio as the overall conference attendance, the question askers are substantially more likely to be male. Next: Why?
asteroids in LSST
Juric (LSST) gathered together a team to look at discovery of fast-moving sources in LSST. He explained that right now the asteroids put such a strong constraint on imaging-survey cadence that any improvement in orbit-determination methods could vastly improve the efficiency of the survey. The team made some progress on brute-force methods for "linking up" disjoint observations of fast-moving sources, constrained by Solar-System dynamics.
AAS routing
Foreman-Mackey, with help from several hackers including Gregerson (Utah) and Malyshev (Stanford), built an interface to the AAS meeting program that takes as input some text (like an abstract from one of your recent papers) and returns a list of the AAS meeting contributions you are most likely to want to attend. We got tentative agreement from Kevin Marvel (AAS) that, under certain conditions, the AAS might add the functionality to the program at the next meeting!

What a day! Thanks very much to Microsoft and Northrop Grumman for sponsorship, and Kelle Cruz (CUNY) and Meg Schwamb for organizing!

2014-01-08

AAS 223, day 3

I spent lunch talking with Leonidas Moustakas (JPL) about possible cross-cutting work on data analysis and information theory for large scientific projects, in astrophysics and beyond. He imagines building a group at JPL that can make all the projects at the lab better. I followed this conversation by making slides from scratch for my plenary talk tomorrow; I am talking about things I have never really put together into one talk before. I think I left it a bit late! I plan to argue that if we are going to get the most out of our data, we have to respect information theory, which means building and propagating likelihood functions. Unfortunately, there are many places where this might be intractable.

2014-01-07

AAS 223, day 2

I arrived (delayed) at AAS today, just in time to give my short talk on code sharing in the code-sharing panel (slides here) chaired by Alice Allen (builder of the ASCL). In that session, Tollerud (Yale) gave a nice talk about how development started and is sustained on astropy.

So many things came up in the hearty, long discussion (a rare treat at AAS sessions, which are often passive) that I can't do them all justice. However, from the audience, Archibald (McGill) and others weighed in on the trade-offs between releasing code as-is and releasing useful code; the latter is often time-consuming and isn't necessarily worth the effort. The panelists were on the side of releasing code even if it is not useful, but there is no doubt that trade-offs exist. Ferguson (STScI) and others elucidated a tension between Tollerud's point that it was good, in astropy, to get people to join forces and write code with consistent, reusable structure, and my point that everyone should just put their code out there no matter what. We resolved it by noting that you can't get a slick project like astropy together if you haven't already seen people put their code out there, warts and all. Indeed, astropy is built from refactorings of various less-than-perfect original code releases. Lovegrove (UCSC) and others discussed export restrictions; there was general unhappiness with their all-encompassing vagueness.

At the end of the session, Hanisch (STScI) suggested that the AAS Working Group on Astronomical Software should maybe become a division and expand its scope to holding meetings and sessions, providing training, setting policy, and so on. That's a great idea. Cruz (CUNY) thinks the first event perhaps should be something on unit testing!

2014-01-06

sharing your code

I worked on slides for my talk at the code-sharing session at AAS tomorrow. I am going to talk about ethical reasons to share your code, and then focus on costs and benefits. Among the costs, I think the biggest real cost is answering email and support requests. I think the most overblown cost is the risk of getting scooped. When have we ever seen someone get scooped because they shared their code? Usually code sharing is met with deafening indifference! Among the benefits, I think the biggest real benefit is the goodwill and visibility it creates in the community, which leads to citations, influence, and collaboration. Will post the slides tomorrow when they are done (but of course they are visible on github at all times in all stages of development, people!).

2014-01-04

engineering and big projects

I worked yesterday and today on my AAS 223 plenary talk about engineering and large astrophysics projects. It took me ages to figure out what my take-home points should be. Right now I am at: (1) Homogeneous and uniform samples are impossible, unnecessary, and harmful. (2) Calibration programs reduce your overall measurement accuracy. (3) Propagation of uncertainty is very difficult. (4) There is an open challenge of making precise measurements efficiently while still leaving open "discovery space" for new things.

2014-01-02

hack day prep

The only research work I did today was a bit of preparation for the AAS Hack Day, which happens in one week's time at the AAS meeting near Washington DC. I hope it goes as well as last time.