Showing posts with label intergalactic medium. Show all posts

2023-03-09

L2G2

Today I went to the L2G2 (Local Local Group Group) meeting at Columbia. This meeting started with a dozen of us around a table and is now 50 people packed into the Columbia Astronomy Library! A stand-out presentation was by Grace Telford (Rutgers), who showed beautiful spectroscopy of low-metallicity O stars. From their spectral features and (in one case) surrounding H II region, she can calculate their production of ionizing photons. This is all very relevant to the high redshift universe and reionization. Afterwards, Scott Tremaine (IAS) argued that The Snail could be created by random perturbations, not just one big interaction.

2019-07-19

quasar lifetimes

Today Christina Eilers (MPIA) gave a great colloquium talk at MPIA about the intergalactic medium, and how it can be used to understand the lifetime of quasars: Basically the idea is that quasars ionize bubbles around themselves, and the timescales are such that the size of the bubble tells you the age of the quasar. It's a nice and simple argument. Within this context, she finds some very young quasars; too young to have grown to their immense sizes. What's the explanation? There are ways to get around the simple argument, but they are all a bit uncomfortable. Of course one idea I love (though it sure is speculative) is that maybe these very young quasars are primordial black holes!

In other research today (actually, I think this is not research according to the Rules), I finished a review of a book (a history of science book, no less) for Princeton University Press. I learned that reviewing a book for a publisher is a big job!

2017-08-04

M-dwarf spectral types; reionization

Jessica Birky (UCSD) and I met with Derek Homeier (Heidelberg) and Matthias Samland (MPIA) to update them on the status of the various things Birky has been doing, and discuss next steps. One consequence of this meeting is that we were able to figure out a few well-defined goals for Birky's project by the end of the summer:

Because of a combination of too-small training set and optimization issues in The Cannon, we don't have a great model for M-dwarf stars (yet) as a function of temperature, gravity, and metallicity. That's too bad! But on the other hand, we do seem to have a good (one-dimensional) model of M-dwarf stellar spectra as a function of spectral type. So my proposal is the following: We use the type model to paint types onto all M-dwarf stars in the APOGEE data set, which will probably correlate very well with temperature in a range of metallicities, and then use those results to create recommendations about what spectral modeling would lead to a good model in the more physical parameters.
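To make the type-painting step concrete, here's a toy numpy sketch of the idea: given a one-dimensional model of spectrum as a function of type, assign each star the type that minimizes chi-squared. Everything here (the Gaussian-dip "spectra", the grid, the numbers) is invented for illustration; the real model would be the trained Cannon model.

```python
import numpy as np

def paint_types(fluxes, ivars, type_grid, model_spectra):
    # assign each star the grid type whose model spectrum minimizes chi-squared
    types = np.empty(len(fluxes))
    for j, (f, w) in enumerate(zip(fluxes, ivars)):
        chi2 = np.sum(w * (f[None, :] - model_spectra) ** 2, axis=1)
        types[j] = type_grid[np.argmin(chi2)]
    return types

# toy "model": spectra are Gaussian dips whose widths grow with type
type_grid = np.linspace(0.0, 9.0, 91)          # M0.0 .. M9.0 in 0.1 steps
wave = np.linspace(-1.0, 1.0, 200)
model_spectra = 1.0 - 0.5 * np.exp(-wave[None, :] ** 2
                                   / (0.1 + 0.05 * type_grid[:, None]))

rng = np.random.default_rng(42)
truth = np.array([2.0, 5.0, 8.0])              # toy stars with known types
fluxes = 1.0 - 0.5 * np.exp(-wave[None, :] ** 2 / (0.1 + 0.05 * truth[:, None]))
fluxes = fluxes + 0.001 * rng.normal(size=fluxes.shape)
ivars = np.full_like(fluxes, 1.0e6)
print(paint_types(fluxes, ivars, type_grid, model_spectra))
```

The painted types could then be regressed against temperature and metallicity to make the spectral-modeling recommendations.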

Late in the day, José Oñorbe (MPIA) gave a great talk about the empirical study of reionization. He began with a long and much-needed review of all the ways you can measure reionization, using radio imaging, Lyman-alpha forest, damping wings, cosmic microwave background polarization, and so on. This brought together a lot of threads I have been hearing about over the last few years. He then showed his own work on the Lyman-alpha forest, where they exploit the thermodynamic memory the low-density gas has about its thermal history. They get good results even with fairly toy models, which is very promising. All indicators, by the way, suggest a very late reionization (redshifts 7 to 9 for the mid-point of the process). That's good for observability.

2016-12-14

map-making, self-calibration, stellar rotation

In my stars group meeting at CCA, Ruth Angus (Columbia) told us about her work to replace standard methods for determining stellar rotation with a probabilistic model, based on Gaussian Processes with a quasi-periodic kernel. This seems to work extremely well! There are some pesky outlier stars, even in simulated data, in which all methods (including Angus's best) seem to get the wrong answer for the stellar rotation; these are interesting for further investigation.
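For the record, the quasi-periodic-kernel idea is easy to sketch: a squared-exponential envelope times a periodic term, with the rotation period as a kernel parameter you can optimize or sample. This is a toy numpy version with my own (invented) parameter names and numbers, not Angus's code:

```python
import numpy as np

def qp_kernel(t1, t2, amp, ell, gamma, period):
    # quasi-periodic covariance: squared-exponential envelope times a
    # periodic term; `period` plays the role of the rotation period
    tau = t1[:, None] - t2[None, :]
    return (amp ** 2 * np.exp(-0.5 * tau ** 2 / ell ** 2)
            * np.exp(-gamma * np.sin(np.pi * tau / period) ** 2))

def gp_lnlike(t, y, yerr, params):
    # Gaussian-process marginal log-likelihood of a light curve
    K = qp_kernel(t, t, *params) + np.diag(yerr ** 2)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(t) * np.log(2 * np.pi))

# toy check: the marginal likelihood should peak near the true period
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 50, 120))
y = np.sin(2 * np.pi * t / 7.0) + 0.1 * rng.normal(size=t.size)
yerr = 0.1 * np.ones_like(t)
periods = np.array([3.0, 5.0, 7.0, 9.0, 11.0])
lnl = [gp_lnlike(t, y, yerr, (1.0, 20.0, 2.0, p)) for p in periods]
print(periods[int(np.argmax(lnl))])
```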

In my cosmology group meeting, I got fully taken to school. First Brian Keating (UCSD) told us about self-calibration of CMB polarimetry. It turns out that you can't self-calibrate for some of the most important science. That is, your signal might be in exactly the modes to which the self-calibration is orthogonal! That's bad. And an aspect of self-calibration that I haven't thought about before. He discussed many crazy and creative ways to do absolute calibration of polarimetry devices; none of them look good enough (or cheap enough) at this point.

Then Colin Hill (Columbia) told us about map-making things he is working on in CMB data. I got all crazy because he is only considering linear combinations of observed data to (say) produce the thermal S-Z map (from, say, Planck data). But then he pointed out (correctly) that all least-squares methods return linear combinations of the data! Oh duh! All L2-like methods return linear combinations of the data. So then we went on to think about combined L1 and L2 methods that could permit him to open up his model space without enormously over-fitting. At the end of the discussion I had a job to do: Write down the largest class of convex map-making methods I can, given what I know about L1 and L2.
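For my notes: the kind of convex L1-plus-L2 objective we discussed can be minimized with a few lines of proximal-gradient (ISTA-style) code, the L1 part handled by soft-thresholding. A toy sketch, with all names and numbers invented (this is not Hill's map-making problem, just the optimization idea):

```python
import numpy as np

def elastic_net(A, b, alpha, beta, n_iter=2000):
    # minimize 0.5*||A x - b||^2 + alpha*||x||_1 + 0.5*beta*||x||^2
    # by proximal gradient descent; soft-threshold handles the L1 part
    x = np.zeros(A.shape[1])
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + beta)   # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b) + beta * x
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * alpha, 0.0)
    return x

# toy demo: recover a sparse "map" from underdetermined linear data
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 60)) / np.sqrt(40.0)
x_true = np.zeros(60)
x_true[[5, 20, 41]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = elastic_net(A, b, alpha=0.01, beta=1e-4)
print(np.flatnonzero(np.abs(x_hat) > 0.5))
```

The point of the demo is that the L1 term lets you open up the model space (60 parameters, 40 data points) without over-fitting, while the whole objective stays convex.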

In between group meetings Cameron Hummels (Caltech) talked about open-source codes he is building that take simulation outputs from cosmological hydro simulations and predict observables, especially those that relate to the inter-galactic medium and circum-galactic medium. We talked a lot about the differences between resolution effects and sub-grid physics choices, which are confusingly inter-related.

2016-11-17

emission-line galaxy spectra

The day opened with a conversation with Guangtun Zhu (formerly JHU) who has been doing great things with the eBOSS and MaNGA spectra from SDSS-IV. On the former, he has made a composite (average) spectrum and can see many things that haven't been seen before in galaxies like these. He can see fluorescence from the outer ISM (or maybe IGM) and he can see the effects of other extremely weak emission and absorption lines. He can also see, in great detail, that the emission lines are due to outflows: Different lines with different relative amounts of absorption and emission have different profiles, and he has a consistent story for all of these.

I ended the day by working on the text in the paper on image modeling (image differencing) by Dun Wang (NYU) and in the paper on data-driven galaxy SED models by Boris Leistedt (NYU).

2016-08-22

the best image differencing ever

I had the pleasure today of reading two draft papers, one by Dun Wang on our alternative to difference imaging based on our data-driven pixel-level model of the Kepler K2 data, and the other by Huanian Zhang (Arizona) on H-alpha emission from the outskirts of distant galaxies. Wang's paper shows (what I believe to be) the most precise image differences ever created. Of course we had amazing data to start with! But his method for image differencing is unusual; it doesn't require a model of either PSF or of the difference between them. It just empirically figures out what linear combinations of pixels in the target image predict each pixel in the target image, using the other images to determine these predictor combinations. It works very well and has been used to find microlensing events in the K2C9 data, but it has the disadvantage that it needs to run on a variability campaign; it can't be run on just two images.
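The prediction step is simple enough to sketch. Here is a toy numpy version of the idea (this is not Wang's code; the rank-one "systematic" and all the numbers are invented): fit a linear combination of other pixels that reproduces the target pixel, training only on the other epochs, then subtract the prediction.

```python
import numpy as np

def predict_pixel(stack, target_idx, pix, predictors, lam=0.1):
    # fit (ridge least squares) a linear combination of predictor pixels
    # that reproduces pixel `pix`, training only on the *other* frames,
    # then apply the fitted combination to the target frame
    train = np.delete(stack, target_idx, axis=0)
    X, y = train[:, predictors], train[:, pix]
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return stack[target_idx, predictors] @ w

# toy stack: a shared rank-one systematic plus noise, with a transient
# injected into one pixel of one frame
rng = np.random.default_rng(3)
n_epoch, n_pix = 100, 25
stack = 10.0 + rng.normal(size=(n_epoch, 1)) * rng.normal(size=(1, n_pix))
stack = stack + 0.01 * rng.normal(size=(n_epoch, n_pix))
stack[50, 12] += 5.0                      # the "microlensing event"
others = [j for j in range(n_pix) if j != 12]
diff = stack[50, 12] - predict_pixel(stack, 50, 12, others)
print(diff)
```

The difference isolates the injected transient because the predictor combination reproduces only the shared systematics, not the one-frame event. The need for many training epochs is exactly why the method requires a variability campaign.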

The Zhang paper uses enormous numbers of galaxy-spectrum pairs in the SDSS spectroscopic samples to find H-alpha emission from the outskirts of (or—more precisely—angularly correlated with) nearby galaxies. He detects a signal! And it is 30 times fainter than any previous upper limit. So it is big news, I think, and has implications for the radiation environments of galaxies in the nearby Universe.

2016-07-14

ABC is hard; The Cannon with missing labels

I spent some time discussing ABC today with Joe Hennawi (MPIA) and Fred Davies (MPIA), with some help from Dan Foreman-Mackey. The context is the transmission of the IGM (the forest) at very high redshift. We discussed the distance metric to use when you are comparing two distributions, and I suggested the K-S statistic. I suggested this not because I love it, but because there is experience in the literature with it. For ABC to work (I think) all you need is that the distance metric go to zero if and only if the data statistics equal the simulation statistics, and that the metric be convex (which perhaps is implied in the word “distance metric”; I'm not sure about that). That said, the ease with which you can ABC sample depends strongly on the choice (and details within that choice). There is a lot of art to the ABC method. We don't expect the sampling in the Hennawi–Davies problem to be easy.
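A bare-bones rejection-ABC loop with a K-S distance is only a few lines. Here's a toy version on a problem where the answer is known; everything (the Gaussian "simulator", the prior, the tolerance) is invented for illustration and has nothing to do with the actual IGM-transmission problem:

```python
import numpy as np

def ks_distance(a, b):
    # two-sample Kolmogorov-Smirnov statistic: maximum gap between the
    # empirical CDFs; zero iff the two empirical distributions agree
    allv = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), allv, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), allv, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

def abc_rejection(data, simulate, prior_draw, eps, n_prop=2000, seed=0):
    # keep prior draws whose simulated data lie within eps (K-S) of the data
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(n_prop):
        theta = prior_draw(rng)
        if ks_distance(data, simulate(theta, rng)) < eps:
            kept.append(theta)
    return np.array(kept)

# toy inference of a Gaussian mean, standing in for the transmission pdf
rng = np.random.default_rng(7)
data = rng.normal(1.0, 1.0, 500)
post = abc_rejection(
    data,
    simulate=lambda th, r: r.normal(th, 1.0, 500),
    prior_draw=lambda r: r.uniform(-5, 5),
    eps=0.1,
)
print(len(post), post.mean())
```

Even in this trivial case you can see the art: shrink `eps` and the acceptance rate collapses; grow it and the posterior inflates.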

As part of the above discussion, Foreman-Mackey pointed out that when you do an MCMC sampling, you can be hurt by unimportant nuisance parameters. That is, if you add 100 random numbers to your inference as additional parameters, each of which has no implications for the likelihood at all, your MCMC still may slow way down, because you still have to accept/reject the prior! Crazy, but true, I think.
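This is easy to check numerically. In this toy Metropolis sampler (my own construction, nothing to do with any real inference), the likelihood depends on only one parameter, yet padding the parameter vector with likelihood-irrelevant nuisances tanks the acceptance rate, because the prior on the nuisances still enters the accept/reject step:

```python
import numpy as np

def metropolis_accept_rate(n_nuisance, n_steps=5000, scale=0.4, seed=0):
    # Metropolis sampling of a 1-D unit Gaussian, padded with n_nuisance
    # parameters that have unit-Gaussian priors but no likelihood effect;
    # returns the fraction of proposals accepted
    rng = np.random.default_rng(seed)
    dim = 1 + n_nuisance
    x = np.zeros(dim)

    def lnpost(v):
        # likelihood depends only on v[0]; nuisances enter via the prior
        return -0.5 * v[0] ** 2 - 0.5 * np.sum(v[1:] ** 2)

    lp, n_acc = lnpost(x), 0
    for _ in range(n_steps):
        prop = x + scale * rng.normal(size=dim)
        lp_prop = lnpost(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp, n_acc = prop, lp_prop, n_acc + 1
    return n_acc / n_steps

print(metropolis_accept_rate(0), metropolis_accept_rate(100))
```

With a fixed proposal scale, the 101-dimensional chain accepts far less often than the 1-dimensional one, even though 100 of those dimensions carry no information.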

In other news, Christina Eilers (MPIA) showed today that she can simultaneously optimize the internal parameters of The Cannon and the labels of training-set objects with missing labels! The context is labeling dwarf stars in the SEGUE data, using labels from Gaia-ESO. This is potentially a big step for data-driven spectral inference, because right now we are restricted (very severely) to training sets with complete labels.

2016-07-10

small telescopes and low intensities

I spent some research time on the weekend working on the question that Joe Hennawi (MPIA) asked me on Friday: What is the sensitivity of a telescope and detector to very faint features on the sky as a function of aperture diameter and focal ratio? There are various regimes to consider. In one, the object is much smaller than a resolution element on the focal plane (the point source limit). In another, much larger, but still much smaller than the detector as a whole. In another, larger even than the detector array. There are slightly different answers in each case, but the large telescope does better in the first two cases, and even in the third if the faint object has any structure on the scale of the detector. I wrote words about this and wondered if there was something to publish (informally or otherwise). Of course small telescopes often have much better optics and scattered light properties, so this isn't the end of the story!

2015-08-27

reionization

At MPIA Galaxy Coffee, K. G. Lee (MPIA) and José Oñorbe (MPIA) gave talks about the intergalactic medium. Lee spoke about reconstruction of the density field, and Oñorbe spoke about reionization. The conversations continued into lunch, where I spoke with the research group of Joe Hennawi (MPIA) about various problems in inferring things about the intergalactic medium and quasar spectra in situations where (a) it is easy to simulate the data but (b) there is no explicit likelihood function. I advocated likelihood-free inference or ABC (as it is often called), plus adaptive sampling. We also discussed model selection, and I advocated cross-validation.

In the afternoon, Ness and I continued code review and made decisions for final runs of The Cannon for our red-giant masses and ages paper.

2014-10-31

quasars! exoplanets! dark matter at small scales!

CampHogg group meeting was impressive today, with spontaneous appearances by Andreu Font-Ribera (LBL), Heather Knutson (Caltech), and Lucianne Walkowicz (Adler). All three told us something about their research. Font-Ribera showed a two-dimensional quasar-absorption cross-correlation, which in principle contains a huge amount of information about both large-scale structure and the illumination of the IGM. He seems to find that IGM illumination is simple or that the data are consistent with a direct relationship between IGM absorption and density.

Knutson showed us results from a study to see if stars hosting hot Jupiters on highly inclined (relative to the stellar rotation) orbits are different in their binary properties from those hosting hot Jupiters on co-planar orbits. The answer seems to be "no", although it does look like there is some difference between stars that host hot Jupiters and stars that don't. This all has implications for planet migration; it tends to push towards disk migration having a larger role.

We interviewed Walkowicz about the variability of the Sun (my loyal reader will recall that we loved her paper on the subject). She made a very interesting point for future study: The "plage" areas on the Sun (which are brighter than average) might be just as important as the sunspots (which are darker than average) in causing time variability. Also, the plage areas are very different from the sunspots in their emissivity properties, so they might really require a new kind of model. Time to call the applied mathematics team!

In the afternoon, Alyson Brooks (Rutgers) gave the astro seminar, on the various issues with CDM on small scales. Things sure have evolved since I was working in this area: She showed that the dynamical influence of baryonic physics (collapse, outflows, and so on) is either definitely or plausibly able to resolve the issues we see with galaxy density profiles at small scales, the numbers of visible satellites, the density distribution of satellites, and the sizes of disk-galaxy bulges. On the latter it still seems like there is a problem, but on the face of it, there is not really any strong reason to be unhappy with CDM. As my loyal reader knows, this makes me unhappy! How can CDM be the correct theory at all scales? All that said, Brooks herself is hopeful that precise tests of CDM at galaxy scales will reveal new physics and she is doing some of that work now. She also gave great shout-outs to Adi Zolotov.

2014-09-16

AstroData Hack Week, day 2

The day started with Huppenkothen (Amsterdam) and me meeting at a café to discuss what we were going to talk about in the tutorial part of the day. We quickly got derailed to talking about replacing periodograms and auto-correlation functions with Gaussian Processes for finding and measuring quasi-periodic signals in stars and x-ray binaries. We described the simplest possible project and vowed to give it a shot when she arrives at NYU in two months. Immediately following this conversation, we each talked for more than an hour about classical statistics. I focused on the value of standard, frequentist methods for getting fast answers that are reliable, easy to interpret, and well understood. I emphasized the value of having a likelihood function!

In the hack session, I spoke with Eilers (MPIA) and Hennawi (MPIA) about measuring absorption by the intergalactic medium in quasars subject to noisy (and correlated) continuum estimation. Foreman-Mackey explained to me that our failures on K2 the previous night were caused by the inflexibility of the (dumb) PSF model hitting the flexibility of the (totally unconstrained) flat-field. I discussed Gibbs sampling for a simple hierarchical inference with Sick (Queen's). And I went through agonizing rounds of good-ideas-turned-bad on classifying pixels in Earth imaging data with Kapadia (Mapbox). On the latter, what is the simplest way to do clustering in the space of pixel histograms?

The research day ended with a discussion of Spectro-Perfectionism (Bolton and Schlegel) with Byler (UW). I told her about the long conversations among Roweis, Bolton, and me many years ago (late 2009) about this. We decided to do a close reading of it (the paper) tomorrow.

2014-07-24

dust priors and likelihoods

Richard Hanson and Coryn Bailer-Jones (both MPIA) and I met today to talk about spatial priors and extinction modeling for Gaia. I showed them what I have on spatial priors, and we talked about the differences between using extinction measurements to predict new extinctions, using extinction measurements to predict dust densities, and so on. A key difference between the way I am thinking about it and the way Hanson and Bailer-Jones are thinking about it is that I don't want to instantiate the dust density (latent parameters) unless I have to. I would rather use the magic of the Gaussian Process to marginalize it out. We developed a set of issues for the document that I am writing on the subject. At Galaxy Coffee, Girish Kulkarni (MPIA) gave a great talk about the physics of the intergalactic medium and observational constraints from the absorption lines in quasar spectra.

2014-07-23

quasar continuum blueward of Lyman alpha, Galactic center

If you go to the blue side of Lyman alpha, at reasonable redshifts (say 2), the continuum is not clearly visible, since the forest is dense and has a range of equivalent widths. Any study of IGM physics or radiation or clustering or ionization depends on an accurate continuum determination. What to do? Obviously, you should fit your continuum simultaneously with whatever else you are measuring, and marginalize out the posterior uncertainties on the continuum. Duh!

That said, few have attempted this. Today I had a long conversation with Hennawi, Eilers, Rorai, and KG Lee (all MPIA) about this; they are trying to constrain IGM physics with the transmission pdf, marginalizing out the continuum. We discussed the problem of sampling each quasar's continuum separately but having a universal set of IGM parameters. I advocated a limited case of Foreman-Mackey and my endless applications of importance sampling. Gibbs sampling would work too. We discussed how to deal with the fact that different quasars might disagree mightily about the IGM parameters. Failure of support can ruin your whole day. We came up with a clever hack that extends a histogram of samples to complete support in the parameter space.
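The marginalization we discussed can be sketched in a few lines: each quasar's continuum samples get log-mean-exp-ed into a per-quasar likelihood of the shared IGM parameter, and the quasars then multiply. Here's a toy version (all names, the one-parameter "IGM model", and the numbers are invented; this is not the MPIA group's code):

```python
import numpy as np

def marginal_lnlike(theta, quasars):
    # quasars: list of (continuum_samples, lnlike_fn) pairs; each quasar's
    # continuum is marginalized by Monte Carlo (log-mean-exp over its own
    # samples), and the quasars multiply into a joint likelihood for the
    # shared IGM parameter theta
    total = 0.0
    for samples, lnlike in quasars:
        lws = np.array([lnlike(theta, c) for c in samples])
        m = lws.max()
        total += m + np.log(np.mean(np.exp(lws - m)))
    return total

# toy setup: "data" are mean transmissions d = theta * continuum + noise
rng = np.random.default_rng(11)
theta_true, sigma = 0.7, 0.02
c_true = rng.uniform(0.8, 1.2, size=5)            # five quasar continua
d_obs = theta_true * c_true + sigma * rng.normal(size=5)
quasars = []
for c0, dq in zip(c_true, d_obs):
    samples = c0 + 0.01 * rng.normal(size=200)    # per-quasar continuum samples
    quasars.append((samples,
                    lambda th, c, dq=dq: -0.5 * ((dq - th * c) / sigma) ** 2))
grid = np.linspace(0.5, 0.9, 81)
lnp = [marginal_lnlike(th, quasars) for th in grid]
print(grid[int(np.argmax(lnp))])
```

The support failure we worried about shows up here too: if one quasar's samples put essentially zero likelihood everywhere on the grid, its log-mean-exp term drags the joint likelihood to minus infinity, which is what the histogram-extension hack is meant to avoid.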

In Milky Way group meeting, Ness (MPIA) showed that there appears to be an over-density of metal-poor stars in the inner one degree (projected) at the center of the Milky Way. She is using APOGEE data and her home-built metallicity indicators. We discussed whether the effect could be caused by issues with selection (either because of dust or a different explicit selection program in this center-Galaxy field). If the effect is real, it is extremely interesting. For example, even if the stars were formed there, why would they stay there?

2014-07-22

extinction and dust, H-alpha photons

While "off the grid" for a long weekend, I spent time writing documents for Coryn Bailer-Jones (MPIA) and Dennis Zaritsky (Arizona). The former was about using spatial priors for inference of the three-dimensional dust density constrained by Gaia data. If you use a Gaussian Process spatial prior, you can perform the inference in extinction space (not dust space) and transfer extinction predictions to new points given extinction data without ever explicitly instantiating the dust density field. This is not a genius idea; it flows from the fact that any linear projection of a Gaussian pdf is itself a Gaussian pdf. The whole thing might not be computationally tractable, but at least it is philosophically possible. One issue with using a Gaussian Process here is that it puts support onto negative dust densities. I don't think that is a problem, but if it is a problem, the fixes are not cheap.
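Here's a toy numpy sketch of that philosophy: each extinction is a line integral of the dust density, so with a GP prior on the density the extinctions are jointly Gaussian, with covariances given by double line-of-sight integrals of the dust kernel (done here as sums). Prediction for a new star is then standard GP conditioning, and the density field is never instantiated. All geometry, kernel parameters, and numbers are invented:

```python
import numpy as np

def path_points(direction, dist, n=50):
    # midpoint-rule discretization of the line of sight to a star
    s = (np.arange(n) + 0.5) * dist / n
    return s[:, None] * direction[None, :], dist / n

def ext_cov(p1, w1, p2, w2, amp=1.0, ell=0.3):
    # covariance of two extinctions: double line-of-sight integral of a
    # squared-exponential dust kernel, approximated as a double sum
    d2 = np.sum((p1[:, None, :] - p2[None, :, :]) ** 2, axis=-1)
    return w1 * w2 * np.sum(amp ** 2 * np.exp(-0.5 * d2 / ell ** 2))

# three observed stars with measured extinctions, all in nearly the same
# direction; predict a fourth star's extinction by GP conditioning
dirs = np.array([[1.0, 0.0, 0.0], [1.0, 0.04, 0.0], [1.0, 0.0, 0.04]])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
A_obs = np.array([0.5, 0.6, 0.55])
paths = [path_points(d, 1.0) for d in dirs]
K = np.array([[ext_cov(p1, w1, p2, w2) for p2, w2 in paths]
              for p1, w1 in paths])
new_path, new_w = path_points(np.array([1.0, 0.0, 0.0]), 1.0)
kstar = np.array([ext_cov(new_path, new_w, p, w) for p, w in paths])
pred = kstar @ np.linalg.solve(K + 0.01 * np.eye(3), A_obs)  # 0.01 = noise var
print(pred)
```

This works because a line integral is a linear projection, and any linear projection of a Gaussian pdf is itself Gaussian. The double sums over path points are also where the computational-tractability worry lives.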

The latter document—for Zaritsky—is about finding the H-alpha photons that are coming from the outskirts of low-redshift galaxies by doing cross-correlations between SDSS spectroscopy and nearby galaxy centers. This project is designed to test or constrain some of the ideas in a talk at NYU by Juna Kollmeier a few months ago.

2014-04-10

probabilistic halo mass inference

In a low-research day, at lunch, Kilian Walsh pitched to Fadely and me a project to infer galaxy host halo masses from galaxy positions and redshifts. We discussed some of the issues and previous work. I am out of the loop, so I don't know the current literature. But I am sure there is interesting work that can be done, and it would be fun to combine galaxy kinematic information with weak lensing, strong lensing, x-ray, and SZ effect data.

2014-04-04

IGM, SNe, GPs

In the morning, Juna Kollmeier (OCIW) gave a great talk on the intergalactic radiation fields (called "metagalactic" for reasons I don't understand). She has found a serious conflict between what is computed by any reasonable sum of sources, what is inferred from the outskirts of galaxies, and what is needed for local IGM studies. One possible resolution, which she was not particularly endorsing, is heating from dark-matter decay or annihilation. Neal Weiner (NYU) loved that idea, for obvious reasons. During the talk, several good project ideas came up, some of them related to the kinds of things Schiminovich has been thinking about, and some related to SDSS-IV MaNGA data. Kollmeier convinced us that a next-generation experiment will just see the IGM!

After lunch, Bob Kirshner (CfA) gave a nice talk about how much more precise supernova cosmology might become if we could switch to (or include) rest-frame near-infrared imaging. He endorsed WFIRST pretty strongly! He also agreed explicitly that getting more SNe is not valuable unless there are associated precision or redshift-distribution improvements. That is, the SNe are systematics-limited; hence his concentration on infrared data, where precision is improved.

Late in the afternoon, Vakili sketched out a fully probabilistic approach to interpolating the point-spread function in imaging between observed stars (to, for example, galaxies being used in a weak-lensing study). Again with the Gaussian Processes. They are so damned useful!

2013-09-12

Blandford

The highlight of a low-research day was a visit from Roger Blandford (KIPAC), who gave the Physics Colloquium on particle acceleration, especially as regards ultra high-energy particles. He pointed out that the (cosmic) accelerators are almost the opposite of thermal systems: They put all the energy very efficiently into the most energetic particles, with a steep power-law distribution. He made the argument that the highest energy particles are probably accelerated by shocks in the intergalactic media of the largest galaxy clusters and groups. This model makes predictions, one of which is that the cosmic rays pretty much must be iron nuclei. In conversations over coffee and dinner we touched on many other subjects, including gravitational lensing and (separately) stellar spectroscopy.

2013-08-22

cosmography, dust mapping, null data, discrete optimization

In a very full day, I learned about quasar-absorption-line-based mapping of the density field in large volumes of the Universe from K. G. Lee (MPIA), I discussed non-parametric methods for inferring the three-dimensional dust map in the Milky Way from individual-star measurements with Richard Hanson (MPIA), I was impressed by work by Beth Biller (MPIA) that constrains the exoplanet population by using the fact (datum?) that there are zero detections in a large direct-detection experiment, and I helped Beta Lusso (MPIA) get her discrete optimization working for maximum-likelihood quasar SED fitting. On the latter, we nailed it (Lusso will submit the paper tomorrow of course) but before nailing it we had to do a lot of work choosing the set of models (discrete points) over which fitting occurred. This reminds me of two of my soap-box issues: (a) Construction of a likelihood function is as assumption-laden as any part of model fitting, and (b) we should be deciding which models to include in problems like this using hierarchical methods, not by fitting, judging, and trimming by hand. But I must say that doing the latter does help one develop intuition about the problem! If nothing else, Lusso and I are left with a hell of a lot of intuition.

2013-08-15

baryons and dark matter

At MPIA Galaxy Coffee, Bovy talked about his work on pre-reionization cosmology: He has worked out the effect that velocity differences between baryons and dark matter (just after recombination) have on structure formation: On large scales, there are velocity offsets of the order of tens of km/s at z=1000. The offsets are spatially coherent over large scales but they affect most strongly the smallest dark-matter concentrations. Right now this work doesn't have a huge impact on the "substructure problem" but it might as we go to larger samples of even fainter satellite galaxies at larger Galactocentric distances. In question period there was interest in the possible impact on the Lyman-alpha forest. In the rest of the day, Sanderson (Groningen) and I kept working on action space, and Lusso (MPIA) and I continued working on fitting quasar SEDs.

2013-07-31

pop-III stars won't do it

In the morning, I had a long discussion with Whitmore (Swinburne), Finkbeiner (CfA), and Schlafly (MPIA) about Whitmore's spectral calibration issues and the time variation of the fine-structure constant. In that discussion we ended up deciding (a) that the calibration issue is most likely caused by the arc illuminating the instrument differently from any star, and (b) that this outcome is the best possible outcome for Whitmore, because it can be modeled and calibrated out.

In the afternoon, at Hennawi (MPIA) group meeting, many interesting things transpired. One is that Girish Kulkarni (MPIA) can show that when you combine all the competing constraints, it is very unlikely that "population-III" (primordial-abundance) stars can be huge contributors to the cosmic radiation density; they can't provide a significant fraction of the necessary reionizing photons. Another is that we strongly encouraged Beta Lusso (MPIA) to perform her SED modeling on a large number of SDSS quasars, and fast, to smack down some not-so-good recent results!

All the while, Patel kept working on the Sloan Atlas and the statistics of quasar light curves.