I finally completed the response to the referee on the Masjedi et al paper on the mass growth of LRGs through merging (major and minor). This task took ages, despite a very straightforward and constructive referee report. I celebrated.
Barron, LeCun, and I complexified our project on modeling galaxy images with three-dimensional galaxy models by considering the case of absorbing dust. We decided that we either have to severely restrict the possible dust geometries or else move to a full
ray tracing or computer graphics approach. I prefer the latter, but Barron is (rightly) concerned about speed. The long-term goal of the project is for the computer to simultaneously classify and model all galaxies, and also choose optimal or natural, data-driven parameterizations of the model space.
Zolotov and I spent some time discussing ways to investigate the dark matter distribution and assembly history for the Milky Way using observations of stars. The approach, with Willman, is to perform observational experiments on simulations of Milky-Way-like galaxies taken from cosmological simulations, in the hope of finding connections between observables and fundamentals that are robust and useful. In the short term, my small job in this project is to repeat, on the simulations, the analysis performed by Eric Bell (and us) in our paper on quantifying substructure in the Milky Way halo, but with the advantage that unlike with real observations, I know exactly the distribution of stars and dark matter in six-dimensional phase space.
At lunch I described Bovy, Moustakas, and my tentative detection of dust absorption in galaxy clusters using background luminous red galaxies. In the afternoon, Barron and I discussed how to flexibly include dust in our three-dimensional models of galaxies.
Art Congdon (Rutgers) gave a nice group-meeting talk in which he showed analytical approaches to understanding the effect of substructure on image magnification ratios and time delays. It is important to have an analytical structure based on perturbation theory, because otherwise you never know whether you are arguing for substructure on the basis of an incomplete search of model space.
Raphael Bousso (Berkeley) surprised me in the Physics Colloquium by saying some things about the anthropic principle that were not wrong. This is surprisingly rare. My principal objection to the principle is that there is no
functional we can apply to a theory and determine the existence of or density of
observers. I have secondary objections relating to whether it is observers we mean at all! But most talks we have had at NYU make an even more basic mistake, which is that of not clearly distinguishing the need for observers from the observational fact that there are observers of the human type in our Universe. If you use the anthropic principle, ask yourself this: How does your
anthropic constraint on your theory differ from an observational constraint that the observed Universe contains, say, galaxies or structure or carbon or metals or stars or people?
The fact that the Universe contains observers of our type is an observation, not a principle. The fact that any observed Universe must contain observers may qualify as a principle.
In related news, Bousso also gave a very nice argument about the cosmological constant problem, which I had not appreciated. You might think that lambda (plus vacuum energy corrections) gets set to exactly zero at the big bang by some requirement on cosmological initial conditions. That's a good idea! But then at the electroweak phase transition, the vacuum energy density drops by an amount that is some 50 or 60 orders of magnitude larger than the currently observed value for lambda. So if it is a fine-tuning that is performed by the Universe, it must be fine-tuned before electroweak to a value that makes it very close to zero after electroweak. Nice demolition, that! He had various other good arguments for the problem being a very serious problem, and proceeded to use these issues to motivate an anthropic selection from the string landscape.
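The size of that drop can be checked with back-of-the-envelope numbers (round values I am assuming here, not necessarily Bousso's): the electroweak scale is of order 100 GeV, while the observed dark-energy density corresponds to an energy scale of order a milli-eV, so

```latex
\rho_{\mathrm{EW}} \sim (100\,\mathrm{GeV})^4 = (10^{11}\,\mathrm{eV})^4 ,
\qquad
\rho_\Lambda^{\mathrm{obs}} \sim (10^{-3}\,\mathrm{eV})^4 ,
\qquad
\frac{\rho_{\mathrm{EW}}}{\rho_\Lambda^{\mathrm{obs}}}
  \sim \left(\frac{10^{11}}{10^{-3}}\right)^{4} = 10^{56} ,
```

which indeed lands in the quoted 50-to-60-orders-of-magnitude range.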
I spent the afternoon in-between meetings trying to plan my sabbatical next semester. It is important that the entire time go to research.
With the end of the semester, it has been two relatively research-free days, I am embarrassed to report. I did spend some time talking with LeCun and Barron about our galaxy modeling project. Blanton and I had some discussions with the deans and provost's office and sponsored programs about meeting the obligations incurred as we join SDSS-3. And Cedric Deffayet (Paris) and Spencer Chang (NYU) confirmed my suspicions that the
Casimir Effect has a conventional explanation that does not involve vacuum energy density.
In a miracle of sorts, the good people at NASA have (partially) funded the Astrometry.net project for the purposes of creating a multi-wavelength catalog out of the data taken by various NASA missions. I spent some time today figuring out how this funded effort meshes with the short-term goals of Astrometry.net and working out the first steps.
Dan McIntosh (UMass) gave a nice group meeting talk on galaxy mergers in SDSS as a function of mass, mass ratio, and environment. During the talk I think we all realized that measures of the merger rate are in some sense more precise measures of galaxy evolution than measures of the differences between different redshifts, but they are not necessarily more accurate because they involve substantial uncertainties in their interpretation.
Finally I returned to my summer project of distinguishing brown dwarfs from high-redshift quasars via proper motions. The faithful reader will recall that the method is to use proper motions determined in multi-epoch imaging in which the source of interest is not significantly detected at any single epoch. I spent the morning remembering what I had done and re-running the code on new data provided by Jester (many months ago).
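The heart of the method can be sketched as a shift-and-stack search: for each trial proper motion, shift each epoch's image back along the trial trajectory and sum, so that the flux of a truly moving source piles up even though no single epoch detects it. Here is a toy numpy version (the function name, the integer-pixel shifts, and the grid search are my illustration, not the actual code):

```python
import numpy as np

def shift_and_stack(images, times, mu_grid):
    """Search for a source too faint to detect at any single epoch.

    images  : list of 2-d arrays, one per epoch
    times   : epoch times (years)
    mu_grid : iterable of trial proper motions (mu_x, mu_y) in pix/yr

    For each trial motion, shift each epoch's image back along the
    trial trajectory and sum; the true motion maximizes the peak of
    the stack. Integer-pixel shifts only; a real implementation
    would interpolate sub-pixel and weight epochs by their noise.
    """
    best = None
    for mux, muy in mu_grid:
        stack = np.zeros_like(images[0], dtype=float)
        for img, t in zip(images, times):
            stack += np.roll(img, (-int(round(muy * t)),
                                   -int(round(mux * t))), axis=(0, 1))
        peak = stack.max()
        if best is None or peak > best[0]:
            best = (peak, (mux, muy))
    return best[1], best[0]
```

With five noiseless epochs of a unit-flux source moving at 2 pix/yr, the stack at the correct trial motion concentrates all five epochs' flux into one pixel, while wrong motions leave the flux spread out.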
Today Barron demonstrated to me a prototype system that can take an archival scanned plate (even of poor quality) and figure out the photographic emulsion with which it was taken. He uses the brightness ranking of the sources in the image, which is not completely trivial given the strong saturation in scanned photographic plates. We are close to our evil plan of being able to reconstruct all calibration meta-data for any image in any state of archival laxity.
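A toy version of the ranking idea (the function names and the rank-correlation choice are my illustration, not Barron's actual system) is to compare the observed brightness ranking of matched stars against the ranking predicted by catalog magnitudes in each candidate band, since each emulsion has a different spectral response:

```python
import numpy as np

def best_emulsion(observed_ranks, catalog_mags):
    """Guess which band's catalog magnitudes best reproduce the
    observed brightness ranking, via a Spearman-style rank
    correlation.

    observed_ranks : ranks of the matched stars on the plate
                     (0 = brightest)
    catalog_mags   : dict mapping band name -> array of catalog
                     magnitudes (smaller = brighter), star-for-star
                     matched to observed_ranks
    """
    def rank_corr(r1, r2):
        r1 = np.asarray(r1, float) - np.mean(r1)
        r2 = np.asarray(r2, float) - np.mean(r2)
        return (r1 * r2).sum() / np.sqrt((r1**2).sum() * (r2**2).sum())

    scores = {}
    for band, mags in catalog_mags.items():
        cat_ranks = np.argsort(np.argsort(mags))  # brightest -> rank 0
        scores[band] = rank_corr(observed_ranks, cat_ranks)
    return max(scores, key=scores.get), scores
```

Rank correlation is a natural choice here precisely because saturation destroys the photometric scale but largely preserves the ordering of the sources.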
I re-wrote the part of Masjedi's recent paper in which we describe how we integrated the correlation functions, because it turns out that we didn't describe it quite correctly. What we did is the right thing in the presence of possible software issues at very small scales, but it is certainly not what you would do if you weren't worried about the software.
Barron and I spent the afternoon discussing his project to measure the dates at which images were taken. To do the best job, he must measure the best centroids; up to now he has been using Robert Lupton's (Princeton) algorithm implemented in the SDSS image processing code. This algorithm fits a set of one-dimensional parabolae; it is a fast numerical approximation to fitting the peak of each source with a two-dimensional parabolic surface. Barron's new centroiding code actually does the full-up parabolic surface fit to the peak of each source, and it appears to return centers that are better than the Lupton code, although at the (significant) expense of compute time.
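For concreteness, the full two-dimensional parabolic surface fit can be sketched as follows (a minimal numpy version assuming a 3-by-3 patch around the peak pixel; this is my illustration, not the SDSS or Barron code):

```python
import numpy as np

def parabolic_centroid(patch):
    """Fit z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 to a 3x3
    patch around a peak pixel by linear least squares, and return
    the sub-pixel offset (dx, dy) of the surface's stationary
    point from the central pixel."""
    assert patch.shape == (3, 3)
    yy, xx = np.mgrid[-1:2, -1:2]
    x, y, z = xx.ravel(), yy.ravel(), np.asarray(patch, float).ravel()
    A = np.column_stack([np.ones(9), x, y, x**2, x * y, y**2])
    a, b, c, d, e, f = np.linalg.lstsq(A, z, rcond=None)[0]
    # Stationary point: grad = 0, i.e. [2d e; e 2f] [dx, dy]^T = [-b, -c]^T
    H = np.array([[2 * d, e], [e, 2 * f]])
    dx, dy = np.linalg.solve(H, [-b, -c])
    return dx, dy
```

The one-dimensional shortcut fits parabolae separately along rows and columns, which is cheap but ignores the cross term `e*x*y`; the surface fit solves the full 6-parameter problem, which is where the extra compute time goes.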