I spent the day at the Princeton APOGEE science workshop, where all the talks were about stellar abundances, kinematics, and inferences therefrom. Of particular interest to me at the moment was James Binney's (Oxford) talk, because he spoke in part about dynamical projects that are related to things we are thinking about at NYU.
Spent a great morning and lunchtime pair-coding the two-point galaxy–quasar-line-of-sight project with Schiminovich. It is much more efficient for two people (if they get along) to sit side-by-side and write all the code together than it is for them to each write half separately. I don't fully understand why, but part of it is bug-catching (a second pair of eyes) and part of it is motivation (stop typing so slowly!). Whatever it is, it is fun, and we got a lot done. The mean quasar images that we are producing are astounding in signal-to-noise and systematics.
I got some satisfying research time in on the emission-line archetype project. I wrote a detailed description of the representation relation such that we can say that spectrum i represents spectrum j. It is then a matter of computing all the representation relations and finally an integer programming problem to find the minimum-sized subset of the spectra such that every spectrum is represented. Integer programming is NP-hard in general, but damn, the open-source GLPK toolkit is pretty incredible (fast, and it often finds the global minimum for problems of interest).
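The minimum representing subset described above is an instance of set cover, which can be posed as a small integer program. Here is a minimal sketch using SciPy's `milp` as a stand-in solver (the post uses GLPK; the distance threshold and the toy "spectra" are made-up stand-ins for the real representation relation):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(0)
# toy "spectra": 30 points in 2-D; say i represents j if they are close enough
X = rng.normal(size=(30, 2))
d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
R = d < 0.8          # R[i, j] True if i represents j (every i represents itself)

n = len(X)
c = np.ones(n)       # objective: minimize the number of archetypes chosen
# coverage constraints: for every j, sum_i R[i, j] * x_i >= 1
cons = LinearConstraint(R.T.astype(float), lb=1.0, ub=np.inf)
res = milp(c, constraints=cons, integrality=np.ones(n), bounds=Bounds(0, 1))
archetypes = np.flatnonzero(res.x > 0.5)   # indices of the chosen archetypes
```

Because every spectrum represents itself, the program is always feasible; the solver then trades off archetypes against coverage exactly as in the post.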
In the spirit of always having synchronized documents and code (documents say what the code does, code does what the documents say), I worked on the code to catch up with where I am in the document.
I couldn't help returning to the archetypes project of way back after Moustakas got me all charged up about emission line phenomenology again at group meeting today. I want to look at the emission lines in an empirical way, with the help of good engineering, and see if we can do better on issues of star formation rates, active galactic nuclei, and metallicity. Unfortunately, engineering approaches are all limited if you don't have a good training set or set of truth values, which we don't, but I am still excited to see what the approach brings us.
I forgot to mention in my gushing Wednesday post that I also had a long conversation with Johnston and her student Yoon about the different physical (or approximation) regimes for a cold stellar stream perturbed by compact substructures or masses in the Galaxy halo. We came up with many different regimes (different rankings of quantities with the same dimensions). This got me confused, but I have spent some part of the weekend trying to get it straight.
Schiminovich, Wu, and I continued our work from yesterday. In the afternoon, Mark Reid (CfA) gave the Physics Colloquium on the rotation speed and mass of the Milky Way; he finds a larger mass and rotation velocity than the standard values; this may resolve several Milky Way mass discrepancies that are currently outstanding.
Today was a great day—all research—it was like a mental health day. Schiminovich and I actually created a quasar–photon cross-correlation function (what any sane person would call an average quasar image) for some color and redshift cuts; we plan to do this as a function of redshift and color to track quasar properties, intergalactic absorption, and (if we are lucky) scattering. But see my posts from late last spring for how the last attempt panned out!
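The quasar–photon cross-correlation above amounts to stacking cutouts of a photon map around quasar positions and averaging. A minimal sketch, where `stack_cutouts`, the image, and the source positions are all hypothetical stand-ins for the real GALEX photon data:

```python
import numpy as np

def stack_cutouts(image, positions, half=8):
    """Mean cutout (an "average quasar image") around a list of pixel positions.

    `image` is a 2-D photon-count map; `positions` are (row, col) integer
    pixel coordinates of the quasars. Sources too near the edge are skipped.
    """
    stack = np.zeros((2 * half + 1, 2 * half + 1))
    n = 0
    for (r, c) in positions:
        cut = image[r - half:r + half + 1, c - half:c + half + 1]
        if cut.shape == stack.shape:
            stack += cut
            n += 1
    return stack / max(n, 1)

# toy demo: flat Poisson background plus point sources at known positions
rng = np.random.default_rng(1)
img = rng.poisson(2.0, size=(200, 200)).astype(float)
pos = [(50, 50), (120, 80), (160, 160)]
for (r, c) in pos:
    img[r, c] += 50.0
mean_img = stack_cutouts(img, pos)
# the central pixel of the stack stands out well above the background
```

The signal-to-noise of the central source grows with the number of stacked quasars, which is why the mean images can look so good even when no individual source is visible.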
We also worked with Wu on the Spitzer IRS spectroscopy. Wu got the coadding working with aggressive masking and we have some very nice, high signal-to-noise (thank goodness) galaxy spectra. As we are in the late stages of the Spitzer cold mission lifetime, when targets are getting sparse and the cryogen has outlasted predictions, our statistical program has been very well scheduled and we therefore have lots of data.
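Coadding with aggressive masking, as in the IRS work above, is often done by sigma-clipping repeat exposures pixel-by-pixel. A sketch under that assumption (the function name, threshold, and toy data are mine, not the actual pipeline):

```python
import numpy as np

def clipped_coadd(frames, nsigma=3.0):
    """Coadd repeat spectra with outlier masking (per-pixel sigma clipping).

    `frames` is an (n_exposures, n_pixels) array. Pixels more than `nsigma`
    robust sigmas from the per-pixel median are masked before averaging.
    """
    frames = np.asarray(frames, dtype=float)
    med = np.median(frames, axis=0)
    # robust per-pixel sigma from the median absolute deviation
    sigma = 1.4826 * np.median(np.abs(frames - med), axis=0)
    mask = np.abs(frames - med) <= nsigma * np.maximum(sigma, 1e-12)
    good = np.where(mask, frames, np.nan)
    return np.nanmean(good, axis=0)

# toy demo: 10 noisy exposures of a flat spectrum, with one cosmic-ray hit
rng = np.random.default_rng(2)
frames = rng.normal(1.0, 0.05, size=(10, 100))
frames[3, 40] = 50.0      # cosmic ray in one exposure
coadd = clipped_coadd(frames)
```

Using the median and MAD rather than the mean and standard deviation keeps the clipping threshold itself from being dragged around by the very outliers it is meant to reject.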
I worked on various papers this morning, including Zolotov's paper on halo formation and its accessibility to observations, and mine on catalog matching and on imaging dark-matter substructure using cold streams.
In what little research time I had in the last few days I worked on the problems of catalog matching and of image modeling, which I think may end up being the same problem.
Last Thursday, Josh Frieman (Chicago, FNAL) gave a nice Department Colloquium about the SDSS-II supernova survey. He showed that conclusions about the dark energy were a function of analysis technique, because there are covariances between what is assumed about dust attenuation and what is assumed about the Phillips relation between Type Ia supernova brightness and time to fade. This opens up a few great questions in inference: It is a place where the freedom given to the inference affects the results. Time to get all Bayesian, I bet!
Schiminovich and I re-tooled our GALEX–USNO project (build an all-sky quasar catalog, using the SDSS as a training set) into a GALEX–2MASS project. It appears that the 2MASS photometry is better suited to our purposes, although the 2MASS source density is not as high.
Schiminovich, Wu, and I looked at the pixel level at Spitzer IRS spectra. There are some very strange things in the statistical properties of the sky frames that we don't understand. But it appears that we may be able to do science without fully understanding this.
I cranked out text for an introduction to a paper on phase-space structure in the Galaxy and its use for inferring the Galaxy's dynamical state and history. I have absolutely no idea what we are going to say about this problem, but oddly that uncertainty in no way restricted or impaired what I could write. I sent it to Johnston, Sharma (Columbia), and Bovy for comments.
I have been working on automated calibration and vetting of data for a while. At the same time, I have been telling people that the only way to know whether your data are okay is to do some real scientific investigation with those data. The logical conclusion of this is that if we want hard-core automated data vetting, we need robots that do science. I wrote about this in my contribution for the upcoming CESS 2009.
This all ties into our craziness about the Theory of Everything or simultaneous modeling of all astronomical imaging. We have figured out—in principle—a way to do this fitting so that a robot can make discoveries as it ingests new data; any data useful for a real discovery is clearly vetted at some level.
At the brown-bag today, Iggy Sawicki talked about one of my favorite ideas: Make the dark matter out of copies of the light sector, but many, many very sparsely populated copies. You can think of these as other "folds" of a brane on which we live, or you can think of them as having some quantum number different from everything in our sector. But the point is, you can make the dark matter without postulating any novel particles or interactions at all. The key is that you need to make many other sectors and populate them all sparsely, otherwise the dark matter doesn't appear collisionless (or close thereto).