Yes, one- and two-sigma GALEX measurements do distinguish usefully between low- and high-redshift quasars.
The BOSS project in SDSS-III is taking spectra of hundreds of thousands of quasars, which requires selection based on existing imaging. Some kinds of quasars in the redshift range of interest (around 2.5) are confused, in the imaging we have, with blue stars and redshift-one quasars. None of these types of sources is detected in any GALEX catalog, but the redshift-one quasars do have, on average, a bit of ultraviolet flux. Today Schiminovich, Hennawi, and I found that there is hope that even two-sigma GALEX detections might serve as a strong veto of redshift-one contaminants.
The (obvious, in retrospect) punchline is: If you care about improving samples contaminated at the tens-of-percent level, you can use one- or two-sigma measurements; they aren't reliable, but they are as reliable as the sample you started with.
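The arithmetic behind that punchline can be sketched with made-up numbers (the contamination fraction and veto rates below are assumptions for illustration, not measurements from our work):

```python
# Hypothetical illustration: start with a sample that is 20 percent
# redshift-one contaminants.
n_good, n_bad = 800, 200

# Suppose a two-sigma GALEX veto catches 50 percent of the UV-bright
# contaminants (assumed), while falsely vetoing only ~2.5 percent of
# true targets (roughly the one-sided Gaussian rate at two sigma).
veto_rate_bad, veto_rate_good = 0.5, 0.025

good_left = n_good * (1 - veto_rate_good)
bad_left = n_bad * (1 - veto_rate_bad)
contamination = bad_left / (good_left + bad_left)
print(round(contamination, 3))  # 0.114: down from 0.20
```

The veto is individually unreliable (a 2.5-percent false-positive rate would be disastrous for a clean sample), but it halves the contamination of a sample that was dirty to begin with.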
Price-Whelan and I nearly finished our response to the second referee report. Again, the referee made good suggestions, so no complaints. We hope to resubmit on Friday. No other real research, except a few minutes at the board with Jagannath at the holiday party.
Van Velzen gave a very good Brown-Bag talk today on the stellar tidal disruption event he and Farrar found in the SDSS Stripe 82 data. It is a beautiful candidate and suggests a high rate for such events. There was some suggestion that we might be able to enormously increase efficiency using GALEX. (GALEX has been used by Gezari and others to find a few already.)
Joe Hennawi (MPIA) showed up for a week today and we spent much of the day figuring out what we want to achieve in the areas of quasar target selection. Schiminovich came down and we decided to get some measurements done with GALEX; we are all optimistic that few-sigma measurements in GALEX pixels can break degeneracies in the visible selection of redshift-three quasar candidates from a sea of stars. We'll find out this week.
ps: WISE launched today. Good luck, people!
Jennifer Siegal-Gaskins (OSU) spoke about detection of the dark-matter annihilation signal in the Fermi data stream by decomposing the diffuse emission into separate components, each of which has a particular energy spectrum and angular power on the sky. After the talk I suggested a generalization of her methods and started writing a document about them. She convinced me that Fermi is extremely promising for detection of dark matter, much more promising than I had previously thought.
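The decomposition idea can be made concrete with a toy version (this is my sketch, not her actual pipeline; the spectra, maps, and noise level below are all invented): model the emission in each energy bin as a linear combination of components with fixed spectral shapes, and fit the per-pixel amplitudes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_energy, n_pix = 10, 500

# Two assumed component spectra: a soft power law and a bumpy
# dark-matter-like spectrum (both made up for illustration).
energies = np.logspace(0, 2, n_energy)
spec_pl = energies ** -2.4
spec_dm = np.exp(-0.5 * ((np.log(energies) - 3.0) / 0.5) ** 2)

# Each component has its own (here random) angular pattern on the sky.
map_pl = rng.random(n_pix)
map_dm = rng.random(n_pix)
truth = np.outer(spec_pl, map_pl) + 0.1 * np.outer(spec_dm, map_dm)
data = truth + 0.01 * rng.standard_normal(truth.shape)

# Fit per-pixel amplitudes for each spectral template by least squares.
A = np.stack([spec_pl, spec_dm], axis=1)           # (n_energy, 2)
coeffs, *_ = np.linalg.lstsq(A, data, rcond=None)  # (2, n_pix)
print(coeffs.shape)  # (2, 500)
```

Because the two spectral templates are nearly orthogonal, the fitted amplitude maps separate cleanly even with noise; a dark-matter component would then announce itself through the angular power of its amplitude map.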
Roweis and I realized today that we should enforce much more than smoothness on the matrix that relates flux (in the observed object) to pixel values (in the CCD); we know that the matrix can be thought of as a narrow spectral trace convolved with an aperture-convolved, pixel-convolved point-spread function. So we should just model these two things (trace and PSF), and enforce smoothness there. That represents much stronger prior information for our inference, but it is sensible, in the sense that if we inferred a matrix that does not look like a trace convolved with a (slowly varying) PSF, we wouldn't believe it. We also worked out a bit about how to proceed.
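A minimal sketch of that parameterization (all shapes, trace coefficients, and widths below are made up for illustration, not taken from any real spectrograph):

```python
import numpy as np

n_wave, n_pix = 200, 64
waves = np.arange(n_wave)
pix = np.arange(n_pix)

# Trace: the pixel center of each wavelength, a slowly bending curve.
trace_center = 20.0 + 0.1 * waves + 1e-4 * waves ** 2

# PSF width varies slowly with wavelength.
sigma = 1.5 + 0.002 * waves

# A[i, j] = response of pixel i (cross-dispersion) to wavelength j:
# a Gaussian PSF centered on the trace, normalized to conserve flux.
A = np.exp(-0.5 * ((pix[:, None] - trace_center[None, :]) / sigma) ** 2)
A /= A.sum(axis=0)

# Any spectrum maps to pixel values through A.
spectrum = 1.0 + 0.5 * np.sin(waves / 15.0)
image_column = A @ spectrum
print(image_column.shape)  # (64,)
```

The point is that A, which naively has n_pix times n_wave free parameters, is here controlled by a handful of smooth functions; an inferred A that cannot be written this way is one we wouldn't believe anyway.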
I spent a great lunch and early afternoon with Maurizio Porfiri (NYU Poly), who is a mechanical engineer and applied mathematician working on dynamical networks of independent agents like fish. He has some sweet experiments in which his lab watches fish schooling, works out the schooling rules, and then inserts a robotic fish and uses the robotic fish to
take control of the school and drive it around! I discussed with him the possible relevance of his work to our ideas about building a collaboration between NYU, NYU Poly, NYU Abu Dhabi, and NYU Global that will create and run a global network of autonomous telescopes.
In the late afternoon, a group of high school students came from NEST high school in Manhattan to NYU to present original research they have done with the Spitzer Space Telescope on the AGN NGC 4051. They got observations through the director's discretionary time and they have done a huge amount of work with the data and come to some interesting conclusions about the structure of the dust torus. Nice work!
Bovy and I discussed how we are going to measure proper motions from SDSS-III imaging (and SDSS-I and SDSS-II imaging) in comparison with USNO-B imaging. We want to model the pixels, but pragmatic considerations may force us to execute only a rough approximation to that lofty goal.
Great talks today from Daniel Grin (Caltech) on extremely precise cosmic recombination (it should be called
combination) calculations (he had to go into the hundreds of levels, with all the angular momentum states separately tracked), Paolo Spagnolo (INFN) on the possibilities for new physics in the LHC's first year of operation (when it is operating at partial power), and Keith C. Chan (NYU) on primordial non-Gaussianity measured from galaxy surveys like BOSS.
In the evening I balanced out the grueling talk schedule with a little relaxing work on the theory behind Roweis's and my spectroscopic dreams.
On the weekend I spent time reading Bovy's next huge tome about the moving groups, in which he demonstrates that they are far more likely to be dynamically created than to be fossils of past star-formation events, and that within the dynamical options they are more likely to be generated by transient dynamical complexity (from, for example, spiral arms and similar inhomogeneities) than from long-lived dynamical resonances. This supports suggestions by Tremaine. We hope to finish the paper within days.
I also finally worked through and sent my comments on spectro perfectionism; in short I complained about their treatment of exposure time, darks, and biases; their restriction to invertible matrices; the fact that the final output they advocate has no Bayesian generalization and is not a point estimate of any simple statistical property of the data or spectrum; their timidity about huge matrices and huge least-squares problems; and the issues of consistently modeling cosmic rays or data with non-Gaussian noise. I never have this many complaints about a paper unless I love it (with one notable exception); don't get me wrong: I love the paper.
David Law (UCLA) gave our astro seminar today on his halo triaxiality result. The odd thing is that although they permit triaxiality, they find oblateness, with the principal axis in a weird direction. But then maybe it isn't so weird, because it looks like the weird direction might just be the orbital plane of the LMC. So is it just a result of the dynamical influence of an orbiting LMC? If so, that might end up
weighing the LMC!
I spent part of the morning working out the linear-in-eccentricity limit of the low-eccentricity Kepler radial-velocity problem with Price-Whelan and Jagannath. We had a slow start but eventually got expressions for the Fourier expansion to m=2 for small-eccentricity systems. After that, Price-Whelan plotted them up and showed them to be as accurate as we expected (that is, residuals were order e^2).
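A numerical sanity check of that claim, using the standard first-order result (my sketch, not necessarily the exact expressions Price-Whelan plotted): to first order in e, with M the mean anomaly and omega the argument of pericenter, v_r is approximately K [cos(M + omega) + e cos(2M + omega)], and the residuals against the exact Kepler velocity should be order e^2.

```python
import numpy as np

def kepler_E(M, e, n_iter=50):
    """Solve Kepler's equation E - e sin E = M by fixed-point iteration
    (fine for small e)."""
    E = M.copy()
    for _ in range(n_iter):
        E = M + e * np.sin(E)
    return E

e, K, omega = 0.05, 1.0, 0.3
M = np.linspace(0.0, 2 * np.pi, 500)

# Exact: true anomaly from the eccentric anomaly, then the RV curve.
E = kepler_E(M, e)
nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                    np.sqrt(1 - e) * np.cos(E / 2))
v_exact = K * (np.cos(nu + omega) + e * np.cos(omega))

# First-order (m = 2) Fourier approximation.
v_approx = K * (np.cos(M + omega) + e * np.cos(2 * M + omega))

print(np.max(np.abs(v_exact - v_approx)) < 5 * e ** 2)  # True
```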
In the afternoon, Roweis and I got fired up about spectroscopy. Here are the first few words of a document I started writing up about our project:
In a modern spectrograph, light from fibers or slits is dispersed onto a two-dimensional CCD or CCD-like detector, with (usually) one direction on the detector corresponding to an angular displacement on the sky and another (usually) close-to-orthogonal direction corresponding to wavelength. There are also slitless spectrographs, where one direction is a mixture of angular position and wavelength. The problem of spectroscopic data reduction is the problem of extracting the one-dimensional spectra—astronomical source flux densities as a function of wavelength—from the two-dimensional images. There is a literature on "optimal extraction" of these one-dimensional spectra; the best extraction methods treat the two-dimensional image pixels as data to be fit by a model that consists of a one-dimensional spectrum laid out geometrically on the device and convolved with a two-dimensional point-spread function (which might be a function of the atmospheric seeing or the device properties or both).
Although the optimal extraction literature solves some important problems, the hardest part of spectroscopic data reduction lies not in the extraction step but in the step of measuring or learning the geometric and point-spread functions themselves. These functions have various names in the literature but they can be expressed with a single rectangular matrix A...
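Given such a matrix A, the extraction step itself reduces to (weighted) least squares. A noiseless toy of that step (my illustration, assuming a fake Gaussian-profile A, not a real spectrograph model):

```python
import numpy as np

n_pix, n_wave = 200, 60
pix = np.arange(n_pix)

# Fake A: each wavelength illuminates a Gaussian patch of pixels
# along a straight trace (all numbers invented).
centers = 10 + 3.0 * np.arange(n_wave)
A = np.exp(-0.5 * ((pix[:, None] - centers[None, :]) / 2.0) ** 2)
A /= A.sum(axis=0)

f_true = 1.0 + np.sin(np.arange(n_wave) / 5.0) ** 2
d = A @ f_true  # noiseless toy; real data add noise and need weights

# With uniform noise this is plain least squares; with per-pixel
# variances C one would solve (A.T C^-1 A) f = A.T C^-1 d instead.
f_hat, *_ = np.linalg.lstsq(A, d, rcond=None)
print(np.allclose(f_hat, f_true))  # True: A is full rank, no noise
```

The easy part is this solve; the hard part, as the text says, is knowing A in the first place.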
I was out sick today, but in my bit of work time, Lang and I worked on fitting orbits for our Comet Holmes project. In general, fitting orbits blindly (that is, without a good first guess) is extremely difficult. We are working without having read the relevant literature; I think we may have to read something. Our MCMC works well, but our first guess is terrible.
In other news, Bovy and Murray and I finished the submittable version of our Solar System paper. Bovy submitted it to The Astrophysical Journal today.
Roweis, Blanton, and I had a lively discussion about the recent Bolton & Schlegel paper on spectral data reduction. Roweis suggested a Bayesian generalization of it, the problem being, as Blanton points out, that we have to return to users what they want or expect, which might not be a posterior probability distribution over spectra! Then we moved to discussion of what Bolton & Schlegel call the matrix A. They treat this as known, but of course figuring out this matrix (which is the combined sensitivity of every pixel to every wavelength in every fiber) is really the hard part of spectroscopy.