2010-06-30
Inspired in part by the conversation with Juric yesterday, I worked on finishing up the old, old fitting-a-line document today. I hope to do more on the airplane.
2010-06-29
where's the dust?
Mario Juric (Harvard) dropped by for an hour in the afternoon to discuss inference of the full three-dimensional dust distribution in the Milky Way. He is building on ideas from Ivezic (Washington) and Finkbeiner (Harvard) about these issues, but trying to do the inference by generating the data. This is music to my ears. We talked about how to make a tractable first step.
2010-06-28
deconvolution in the extreme
I spent the weekend and today working on my hierarchical exoplanet papers. Today, Bovy pointed out that they are really just more general versions of our extreme deconvolution method for forward-modeling noisy (and heteroscedastic) data. (On that note, Bovy just resubmitted that paper to AOAS.) Because I care—by assumption—only about the distribution function, and not about improving the individual-exoplanet measurements, I am not doing anything properly hierarchical. I think he is right; I will adjust my introduction accordingly.
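For the record, the extreme-deconvolution likelihood for a data point x_n with known measurement covariance S_n, under a K-Gaussian model of the underlying distribution, is (in my notation)

    p(x_n) = \sum_{k=1}^{K} a_k \, N(x_n \mid m_k, V_k + S_n) ,

that is, every Gaussian component gets convolved with the per-point uncertainty before being compared with the data; as I understand Bovy's point, the exoplanet problem just replaces the Gaussian mixture with a more general (non-Gaussian) model of the distribution function.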
Zolotov and I discussed the issue that any cusp in the projected phase space distribution function will have finite thickness, which smooths out the infinities. We talked about methods for computing the cusp function safely. Power laws are difficult.
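As a toy illustration of the smoothing (my own sketch, not Zolotov's code; all numbers are made up): a power-law cusp convolved with a Gaussian of finite width has a finite peak, and the convolution can be done safely on a grid, like this:

    import numpy as np

    # Toy sketch: a power-law cusp rho(x) ~ |x - x0|^(-gamma) is formally
    # infinite at x0, but a finite "thickness" sigma (modeled here as a
    # Gaussian smoothing) makes the profile finite and easy to evaluate on
    # a grid. The exponent, grid, and width below are illustrative only.
    def smoothed_cusp(x, x0=0.0, gamma=0.5, sigma=0.1, ngrid=4096, span=10.0):
        xg = np.linspace(x0 - span, x0 + span, ngrid)
        dx = xg[1] - xg[0]
        rho = (np.abs(xg - x0) + 1e-12) ** (-gamma)  # floor dodges the singular point
        kernel = np.exp(-0.5 * ((xg - x0) / sigma) ** 2)
        kernel /= kernel.sum() * dx                  # normalize the smoothing kernel
        smooth = np.convolve(rho, kernel, mode="same") * dx
        return np.interp(x, xg, smooth)              # interpolate back to requested x

    print(smoothed_cusp(np.array([0.0, 0.5, 1.0])))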
2010-06-25
SciCoder, hierarchy
In the afternoon I dropped by Muna's SciCoder workshop to give a short spiel about testing, making my pitch that scientific results produce the best end-to-end tests of scientific software. In the morning I worked out a bit more about my hierarchical Bayes methods for exoplanet studies. I am finding the math to be non-trivial, although in the end I want—and expect to have—something simple to explain and simple to execute. I think I will pitch it to people who have, to start, a sampling of exoplanet parameters for each of a set of exoplanets. That is, the pitch will be to Bayesians.
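The basic trick, as I see it so far (a sketch of the idea, not the final math of the paper): if each exoplanet n comes with K posterior samples theta_{nk} generated under some interim prior p_0, then the likelihood for the population parameters alpha is, up to an alpha-independent constant,

    \mathcal{L}(\alpha) = \prod_n \int p(d_n \mid \theta_n)\, p(\theta_n \mid \alpha)\, d\theta_n
                  \propto \prod_n \frac{1}{K} \sum_{k=1}^{K} \frac{p(\theta_{nk} \mid \alpha)}{p_0(\theta_{nk})} ,

so all a user needs from each individual exoplanet analysis is the sampling and the interim prior under which it was made.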
2010-06-24
Johnston
Kathryn Johnston (Columbia) spent the day at NYU, and we talked about many things. We both agree that streams—cold structures in phase space—are the future for constraining the Milky Way potential. We didn't accomplish much, but she encouraged me to finish up our various cold-structure projects!
2010-06-23
National Science Foundation
Does writing an annual report on a grant count as research? I hope so, because that's most of what I did today. I also figured out that the paper I am writing with Myers should really be two papers; we will see if Myers agrees when I break that to him in Heidelberg. I split the minimal text I have now.
2010-06-22
cusps and papers
I worked briefly with Zolotov on her cusp project, debugging code to generate cusp-like density functions. After that I had a long discussion with Bovy about quasar target selection, with him showing me a very good framework for incorporating variability information (a generative model for quasar lightcurves, really), and with me arguing that we should write a quasar catalog paper. If we do that, then the SDSS-III target selection paper could just be a reference to our quasar catalog paper.
2010-06-21
epsilon
Aside from a few conversations I did nothing today.
2010-06-18
hierarchical inclinations, cusps
I spent a bit of time working on hierarchical (read: Bayesian) methods for inferring distributions in the face of significant nuisance parameters, with the goal of getting the exoplanet mass distribution without knowing any inclinations. What we are doing also applies to binary stars and stellar rotation. In the afternoon, Zolotov and I spent some time designing her project to look at cusps in the distribution of stars in the Milky Way halo.
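To make the inclination part concrete (standard geometry, nothing new here): radial-velocity surveys deliver only w = m sin i, and for randomly oriented orbits cos i is uniformly distributed, so sin i has density sin i / sqrt(1 - sin^2 i) on (0, 1) and the observed distribution is the true mass distribution f(m) pushed through that kernel,

    p(w) = \int_w^\infty \frac{f(m)}{m}\,\frac{w/m}{\sqrt{1 - (w/m)^2}}\, dm ,

which is exactly the kind of integral the hierarchical machinery has to marginalize over, rather than invert planet by planet.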
2010-06-17
Jiang, semantics
Today my student Tao Jiang (working on galaxy evolution with the SDSS and SDSS-III data) passed his oral candidacy exam, and I gave my talk (remotely) at AstroInformatics2010. The latter was a bit too aggressive (Djorgovski asked me to be controversial!) and of course I had to tone it down in the question period: Indeed, the semantic world is considering issues of trust. But the only things people could point to on the issue of probabilistic meta-data were the ideas of fuzzy data and fuzzy logic, and I seem to remember that these are not—in their current forms—defined in such a way that they could communicate (and be used for marginalization over) precise likelihood or posterior probability distributions. In the question period, one audience member even suggested (as I have been writing in my blog and elsewhere) that we think of catalogs as parameters of a model of the image pixels! So that was good.
2010-06-16
semantics
I worked today on getting some slides together for my talk at AstroInformatics2010 tomorrow. I am giving my talk by Skype; we will see how that goes! I plan to be pretty controversial, as I have been asked to be; my working title is Semantic astronomy is doomed to fail.
2010-06-15
quasar target selection
I did lots of small things today. Perhaps the least small was to help a tiny bit with Bovy's proposal to change SDSS-III BOSS quasar target selection over to a method that makes principled use of extreme deconvolution (XD). The Bovy method has several advantages over the current method, including:
- It is lightweight. The model is completely specified by a (large) number of Gaussians in photometry space; it can be implemented easily in any code given those Gaussian parameters.
- It is the best-performing method of all those tested by the Collaboration.
- It does not, in any sense, convolve the model with the errors twice. Most empirical distribution function descriptions are descriptions of the error-convolved distribution; when you compare with a new data point you effectively apply the uncertainty convolution to the distribution twice. Not so for XD.
- It is the easiest method to extend in various ways including the following: You can modify the priors easily, even as a function of position on the sky. You can add data from GALEX or UKIDSS or variability studies straightforwardly. You can apply a utility to make the method prefer quasars that are more useful for the measurements of interest.
In general, when you need to do classification you should—if you have measurements with uncertainties you believe—model distribution functions for the classes, and when you want to model distribution functions using uncertainty information correctly you should use XD!
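Schematically (a minimal sketch with hypothetical variable names, not the actual XD target-selection code), the classification step looks like this: given mixture parameters fit by XD for the quasar and star populations, score each new object by convolving every component with that object's own photometric covariance.

    import numpy as np

    def log_gaussian(x, mean, cov):
        """Log of a multivariate normal density N(x | mean, cov)."""
        d = x - mean
        sign, logdet = np.linalg.slogdet(2.0 * np.pi * cov)
        return -0.5 * (logdet + d @ np.linalg.solve(cov, d))

    def xd_log_likelihood(x, xerr, amps, means, covs):
        """log p(x) under an XD mixture, convolving each component with the
        per-object measurement covariance xerr (convolve once, and only once)."""
        logs = [np.log(a) + log_gaussian(x, m, V + xerr)
                for a, m, V in zip(amps, means, covs)]
        return np.logaddexp.reduce(logs)

    def quasar_probability(x, xerr, qso_mix, star_mix, prior_qso=0.5):
        """Relative probability of 'quasar', given (amps, means, covs) mixtures
        for each class and a prior quasar fraction (illustrative value)."""
        lq = xd_log_likelihood(x, xerr, *qso_mix) + np.log(prior_qso)
        ls = xd_log_likelihood(x, xerr, *star_mix) + np.log(1.0 - prior_qso)
        return 1.0 / (1.0 + np.exp(ls - lq))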
2010-06-14
target selection, trust, and funding
The NSF proposal is submitted. It is about—among other things—a trust model for the VO. I think whether or not we get funded, Lang and I should write this up as a short paper, since we figured a fair bit out about the issues. Not sure if that's an LPU though. Spent a good deal of time with Bovy, briefing him on what I got from the Gaia meeting last week, and strategizing about BOSS target selection, where it seems like maybe we should make a firm proposal for being the primary algorithm. In related news, Jessica Kirkpatrick (Berkeley) was visiting today (and will be back next week for SciCoder).
2010-06-12
write like the wind
On the airplane home, I drafted the entirety of my ELSA Conference proceedings contribution. I guess I got pretty stoked at that meeting!
2010-06-11
ELSA Conference on Gaia, day five
This morning was the final session of the meeting. Again, there were too many things to mention. Helmi (Groningen) began the day with a discussion of streams and substructure. Among other things, she showed that substructure falls onto the Galaxy along preferred directions, so features like the famous bifurcation in the Sagittarius stream may well be caused by a pair of galaxies infalling along the same plane! She suggests looking for streams in the three-dimensional space of actions; this is a good idea, but a young stream is also compact (one-dimensional) even in action space, so there is more information available than what you find in action space alone.
At the end of the meeting, I gave my talk on catalogs as models and the idea of propagating likelihood information about image intensity measurements rather than a hard catalog. I realized during the question period that this is my main problem with the semantic ideas in astronomy, and that should be a part of my write-up. Brown (Leiden) followed with a concluding remarks talk in which he was extremely nice to me, but also advocated releasing Gaia results early and often, and releasing the results in such a way that they could be re-analyzed at the pixel level in the future. I wholeheartedly agree. Farewell to a wonderful meeting.
2010-06-10
ELSA Conference on Gaia, day four
Today I gave Bovy's talk on inferring dynamical properties from kinematic snapshots. The talk noted that in our April Fools' paper, the planets with velocity closest to perpendicular to position do the most work. Afterwards, Juric (Harvard) asked whether there are similarly important stars for the Milky Way. If so, we could just observe the hell out of those and be done! My only response is that if we had a sample of realizations of the Milky Way that generate the data—that is, detailed dynamical models—the stars whose properties vary the most across the sampling would be the critical stars. Not a very satisfying answer, since that chain would be almost impossible to produce, maybe even by 2020. Interestingly, my attitude about this turns out to be very similar to that of Pfenniger (Geneva), who gave a very nice philosophical talk about how our vision of the simplicity or complexity of the Milky Way has evolved extremely rapidly. He argued that no smooth models are likely to give right answers, not even approximately right. I don't think everyone in the business would agree with that, though I do. Before either of our talks, Juric gave a summary of his three-dimensional Milky Way modeling using photometric data. He, of course, finds that simple models don't work, but he has some great tools to model the substructure precisely.
Ludwig (Heidelberg) argued that either there are systematic issues with chemical abundance indicators or else even single stellar populations have metallicity diversity. This would do damage to the chemical tagging plans. In conversation afterwards, Freeman (ANU) told me that he thinks the issue is with the models, not the populations, and that the models will improve or are improved already. I suggested to Freeman an entirely data-driven approach, but I couldn't quite specify it. This is a project for the airplane home.
Pasquato (Brussels) made the point very clearly that if stars have strong surface features that evolve (as when they have highly convective exteriors), there will be astrometric jitter if the star has an angular size that is significant relative to Gaia's angular precision. This is obvious, of course (and I mean that in the best possible way), but it suggests that maybe even parallax needs to be defined like radial velocity was yesterday!
Antoja (Barcelona), Minchev (Strasbourg), and Babusiaux (Paris) gave talks about the influence of the bar and spiral structure on the Galaxy and galaxies in general. Minchev presented his now-famous results on radial mixing, which relate in various ways to things we have been doing with the moving groups. Antoja was more focused on understanding the specific impact of specific spiral arm and bar models on the local distribution functions. Babusiaux nearly convinced me that the bulge of the Milky Way was created entirely from the disk by the action of the (or a) bar. Bars rule! After dinner, Minchev and I found ourselves at a bar.
2010-06-09
ELSA Conference on Gaia, day three
It was spectroscopy all morning, with the Gaia Radial Velocity Spectrograph worked over by Cropper (UCL), Katz (Paris), and Jasniewicz (Montpellier). Two things were of great interest to me: The first is that the satellite can only do relative measurements of velocity (really of logarithmic Doppler shift, I think), so they need absolute velocity standards. They will use a combination of asteroids and team-calibrated (from the ground) radial velocity standards that span the sky and the range of spectral types. The issue is more fundamental, of course: To compare stars of different types, you need to tie together radial velocity standards that are based on different lines with different gravitational redshifts and convective corrections. But the issue is even more fundamental still, and that is the second thing that was of great interest to me: Lindegren (Lund) has a paper defining radial velocity that shows that it is not really a spectroscopically measured quantity: There are many physical effects that affect stellar lines. Indeed, this relates in my mind to an old conversation with Geha (Yale) and Tremaine (IAS) about whether it is possible—even in principle—to measure a velocity dispersion smaller than Geha's smallest measurements. I think it is, but only because God has been kind to us; this was not something we could expect just by dint of hard work.
Freeman (ANU) gave a talk about HERMES, an ambitious project to take detailed enough spectra of about a million stars to do full chemical tagging and separate them into thousands of unique sub-groups by detailed chemical properties. He made an argument that the chemical space will be larger than (though not much larger than) a seven-dimensional space. I have been hearing about chemical tagging for years and seen nothing, but with this it really looks like it might happen. This project is explicitly motivated by the science that it enables in the context of Gaia. Where are the American projects of this kind? Also, when will we have the technology or money to take spectra of a billion stars?
In the afternoon, there were Solar-System and asteroid talks; these are always impressive because precisions in this field are so much higher than anywhere else in astronomy. Oszkiewicz (Helsinki) gave a nice talk about fully Bayesian orbit fitting for asteroids, showing that they could have predicted the asteroid hit in Sudan in 2008 with high confidence (did they predict that?). She also showed that they can model asteroid shapes from Gaia lightcurve data with the same kind of MCMC machinery.
2010-06-08
ELSA Conference on Gaia, day two
There is far more to report from today than I can easily manage, so I will just blurt out some highlights once again:
Lindegren (Lund University) gave a beautiful talk about fundamental issues in astrometry, especially with spinning satellites. Gaia—unlike Hipparcos—will work in the limit in which the spin is more strongly constrained by the density of informative stellar transits than it could be by any reasonable dynamical model of the spinning satellite subject to torques. That is, there is only a very weak dynamical model and the data do the talking. This means that any measurement by the satellite that can be accommodated by an attitude change is not constraining on the global astrometric solution! For this reason, with each of the fast rotations (scans) of the satellite, the only constraining measurements made on the global astrometric solution are comparisons between stellar separations and the basic angle of the satellite, along the direction of the scan. It really is a one-dimensional machine, at least on large scales. The two-dimensional images coming off the focal plane will be useful in the transverse direction only on small scales. He followed these beautiful fundamental arguments with discussions of self-calibration of the satellite, which is really what all the talks have been about these two days, in some sense.
Bombrun (ARI, Heidelberg) and Holl (Lund) gave back-to-back talks about the optimization of the linearized system of equations and the error propagation. Optimization (after the Collaboration adopted the conjugate-gradient method) is trivial, but exact error propagation—even for the linearized system—is impossible. That's because the sparse matrix of the linear system becomes very non-sparse when you square and invert it. Holl has some very nice analytic approximations to the inverted matrix, made by expanding around invertible parts and by making simplifying assumptions. This is key for the first generation of error propagation. In my talk I will emphasize that if the Collaboration can expose something that looks like the likelihood function, error propagation is trivial and it becomes the burden of the user, not the Collaboration. However, there is no chance of this in the short run.
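In symbols (my generic shorthand, not the Collaboration's notation): for a linearized model A x ≈ b with data covariance C, the parameter covariance is

    \mathrm{Cov}(\hat{x}) = (A^{\mathsf{T}} C^{-1} A)^{-1} ,

and even when A is extremely sparse, the normal matrix couples parameters through shared observations and its inverse is generically dense; hence the need for Holl's approximations.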
Eyer (Geneva) gave an electrifying talk about variable stars, making clear what should have been obvious: Gaia will be the finest catalog of variable stars ever made, and contain in almost every class of variability hundreds of times more stars than are currently known. This opens up the possibility for all kinds of novel discovery, and enormous improvements in our understanding of the variables we now know. His group is computing observability of various kinds of variables and the numbers are simply immense. He noted that Gaia might discover WD–WD eclipsing-binary gravitational wave sources.
At the end of the day Mahabal (Caltech) spoke about automated transient classification. They are doing beautiful things in the VO/semantic framework. Of course I am a critic of this framework, because I want meta-data to be probabilistic and computer-generated, not human-designed and combinatoric (as in this is a Type IIP Supernova; it is much more useful to be given relative likelihoods of all non-zero possibilities). But there is no doubt that within this framework they are doing beautiful stuff.
2010-06-07
ELSA Conference on Gaia, day one
Today was the first day of the ELSA Conference. Talks focused on the status of Gaia hardware and software, with a lot of concentration on the expected effects of radiation damage to the CCDs and strategies for dealing with that. There were many great contributions; a few facts that stuck in my mind are the following:
Prusti (ESA) emphasized the point that the intermediate or preliminary catalog released in 2015 will only be slightly less good than the final catalog of 2020, if all goes well. He argued that Gaia should release preliminary catalogs since surveys like SDSS have prepared the astronomical community for non-permanent and non-final and buggy catalogs. I agree.
Charvet (EADS) showed the spacecraft design and assembly. It is made from silicon carbide, not metal, for rigidity and stability. Apparently this makes fabrication much more difficult. The mirrors are silvered, polished silicon carbide, attached to a silicon carbide skeleton. The machining of the parts is good to a few microns, and there is an on-board interferometer that continuously measures the internal distances relevant to the basic angle (between the two lines of sight) at the picometer level. It also has an on-board atomic clock. He strongly implied that this is the most challenging thing anyone at EADS has worked on.
van Leeuwen (Cambridge) spoke about spacecraft spin and attitude. He showed that his re-reduction of the Hipparcos catalog (published in book form) came from permitting the spacecraft to be jerked or clanked by configurational changes (related to temperature changes?). He found 1600 such events in the time-stream, and when he modeled them, the systematic errors in the data set went down by a factor of five. I commented after his talk that the real message from his talk was not about spacecraft attitude but about the fact that it was possible to re-analyze the raw data stream. The Gaia position seems to be that raw data will be preserved and that re-analyses will be permitted, as long as they don't cost the Consortium anything. That's fine with me.
O'Mullane (ESA) gave a romp through the computer facilities and made some nice points about databases (customizability and the associated customer service is everything). He does not consider it crucial to go with open-source, in part because there is an abstraction layer between the software and the database, so change of vendor is (relatively) cheap. He then went on to say how good his experience has been with Amazon EC2, which doesn't surprise me, although he impressed me (and the crowd) by noting that while it takes months for ESA to buy him a computer, he can try out the very same device on EC2 in five minutes. That's not insignificant. From a straight-up money point of view, he is right that EC2 beats any owned hardware, unless you really can task it 24-7-365.
2010-06-06
proposal
I worked on the proposal—and other writing projects—on the airplane and in my room at the ELSA conference on Gaia. Still not far enough along!
2010-06-04
too little
I wrote way too little in my NSF Software Infrastructure for Scientific Innovation (NSF's words) grant proposal; it is due on Monday!
2010-06-03
NIPS paper submitted
Lang pulled out all the stops and got our NIPS paper submitted, three minutes before the deadline. In the process, he found a large number of great examples of situations in which an image-modeling or catalog-as-model approach to the SDSS data improves the catalog substantially. The paper is pretty rough around the edges, but we will tune it up and post it on arXiv in a few weeks, I very much hope.
2010-06-02
trusting robots; SciCoder
I worked on an upcoming NSF proposal (with Lang) on building robots that calibrate (and verify calibration meta-data for) astronomical imaging. The idea is to produce, validate, and make trustable the meta-data available through VO-like data interfaces. Right now an astronomer who makes a VO query and gets back hundreds of images spends the next two weeks figuring out if they really are useful; what will it be like when a VO query returns tens of thousands?
In the afternoon, Demetri Muna (NYU), Blanton, and I discussed Muna's SciCoder workshop, where Muna is going to have the participants learn about databases, code, and data interaction by doing real scientific projects with public SDSS data. I am pretty excited about the idea. Right now Muna is doing some of the projects ahead of time so he is prepared for all the bumps in the road.
2010-06-01
perturbed streams
Kathryn Johnston appeared in my office today and we spent a few hours discussing simple calculations of perturbations to tidal streams by CDM-substructure-like satellites. She has greatly simplified the confused thoughts I had about them a year or two ago.