2016-08-31

#AstroHackWeek, day three

Early in the morning, before #AstroHackWeek 2016 day three, I had a long phone call with Andy Casey (Cambridge) about The Cannon, RAVE, and Gaia DR1. He is trying to produce a catalog of detailed abundances from RAVE matched to Gaia T-GAS before the DR1 deadline, which is in two weeks! There are many hard parts to this project. One is to make a model of the red giants and a model of the main sequence, and somehow establish that these two models are physically consistent. Another is to get a training set of detailed abundances for main-sequence stars. We also talked about DR1 zero-day projects. I am still stumped as to what, exactly, I am going to do!

I also got a great email this morning from Megan Bedell (Chicago), demonstrating that the convection explanation is reasonable for the radial-velocity scatter she sees in her HARPS data. As my loyal reader may recall, we spent the last few months demonstrating that there is no evidence (and damn, did we search) that the HARPS pipeline is leaving radial-velocity information on the table. If it isn't, then the radial-velocity scatter must come from intrinsic stellar noise (or something much worse). What Bedell has shown is that the quantitative amplitude and granularity of stellar surface convection is sufficient to lead to meter-per-second jitter. Duh! Now we have to figure out how to deal with that. I have ideas, related to The Cannon.

The afternoon and evening of #AstroHackWeek was at GitHub headquarters in San Francisco, which is a (dangerously) fun place to hack. Jonathan Whitmore (Silicon Valley Data Science) gave a great presentation about all the crazy things you can do with a Jupyter notebook, which blew me away, and Phil Marshall (SLAC) gave a presentation to the company about how GitHub is integrated into astrophysics research these days (oh, and what features we would like).

From my (limited) perspective, the most important thing about day three was that Adrian Price-Whelan (Princeton) and I had the final realization that the importance sampling we wanted to do for the radial-velocity problem will not work. Sad, but true: We can't get a complex enough linear model that maps cleanly enough onto the Kepler problem. So we decided to switch gears today to Phil Marshall's favorite: Simple Monte Carlo. What we will do is sample the prior extremely densely, and then rejection sample to the posterior using the likelihood function. This is usually impossible! We will make it possible in this case by capitalizing on the linearity of two of our parameters: These two parameters we can analytically marginalize out at every sample in the four-dimensional space of non-linear parameters. That's our job for tomorrow.
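
For concreteness, here is a minimal sketch of the scheme in Python (mine, not our actual code; the likelihood here is a stand-in, and in the real problem the marginalization over the two linear amplitudes is a Gaussian integral done analytically at each sample):

```python
import numpy as np

rng = np.random.default_rng(42)

def ln_marginal_likelihood(samples):
    """Stand-in: in the real problem this evaluates, at each sample of
    the four non-linear orbital parameters, the likelihood with the
    two linear amplitude parameters analytically marginalized out."""
    return -0.5 * np.sum(samples**2, axis=1)  # toy placeholder

# Step 1: sample the prior extremely densely (toy: a unit Gaussian).
n = 2**20
prior_samples = rng.normal(size=(n, 4))

# Step 2: rejection-sample to the posterior using the likelihood.
lnl = ln_marginal_likelihood(prior_samples)
accept = rng.uniform(size=n) < np.exp(lnl - lnl.max())
posterior_samples = prior_samples[accept]
print(f"kept {accept.sum()} of {n} prior samples")
```

The cost is an enormous number of prior samples; the payoff is that every accepted sample is an honest posterior sample, multimodality and all.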

Fail fast. That's what we are trying to do.

2016-08-30

#AstroHackWeek, day two

Today was the second day of #AstroHackWeek 2016. Josh Bloom (UCB) spoke in the morning about machine learning, with a great set of tutorials based on the Jupyter notebook and lots of experience introducing scientists to supervised machine learning. He emphasized Random Forest, of course!

In the afternoon, Matt Mechtley (ASU) and Dalya Baron (TAU) generalized my toy molecular-imaging problem from a single rotation angle to the full three-dimensional angle problem. Mechtley sped up and improved the generation of the fake data, and Baron got the inference (including the stochastic gradient descent) working end-to-end. Exciting!

Adrian Price-Whelan (Princeton) and I capitalized on the finite time span of any radial-velocity data set to make a safe period grid for inference, and then did exact sampling at each period in a mixture-of-sinusoids model. That's awesome! But we realized that if we try to go to second order—that is, put in a sine and cosine at half the period as well as at the period—there is no way to control the amplitudes such that we maintain both linearity (which is crucial to our exact sampling) and interpretability (in terms of the true orbital parameters, which is what we really want to sample in). So we decided to implement the first-order problem only today and make a judgement about whether it works for our purposes tomorrow. We got a long way, and made some awesome plots that clearly show the multimodality of the posterior for sparse data sets. I think this could still be a useful tool, even if it doesn't do everything for all customers.
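
The two pieces—the safe period grid and the exact (linear) fit at fixed period—look roughly like this (a sketch with my own variable names, not our hack-day code; with Gaussian noise the amplitude posterior at fixed period is itself Gaussian around the least-squares solution, which is what makes the exact sampling possible):

```python
import numpy as np

def period_grid(t, oversample=4, p_min=1.0):
    """Frequency spacing ~1/(oversample * time span), so that no
    likelihood peak can fall between grid points."""
    span = t.max() - t.min()
    df = 1.0 / (oversample * span)
    freqs = np.arange(df, 1.0 / p_min, df)
    return 1.0 / freqs

def amplitudes_at_period(t, rv, period):
    """At fixed period the mixture-of-sinusoids model is linear in
    (v0, A, B), so the best-fit amplitudes are exact least squares."""
    X = np.stack([np.ones_like(t),
                  np.sin(2 * np.pi * t / period),
                  np.cos(2 * np.pi * t / period)], axis=1)
    coeffs, *_ = np.linalg.lstsq(X, rv, rcond=None)
    return coeffs, rv - X @ coeffs  # amplitudes and residuals
```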

2016-08-29

#AstroHackWeek, day one

They say you shouldn't mess with the timeline! #AstroHackWeek was so busy and full, I ended up not blogging properly during the week, and am writing these blog posts after the fact, based on telegraphic notes taken day-of. This is not uncommon here at Hogg's Research, and, for that, I apologize: Even when I write a post after the fact, I (misleadingly) date it for the day to which it corresponds (and give it a time stamp of one minute before midnight). One of the many reasons that this blog should not be seen as a precise historical document is that these after-the-fact blog posts can certainly be contaminated by present knowledge.

Today was the kick-off day for #AstroHackWeek, our now annual meeting at which participants learn about computational data analysis, and also work on their own computational data analysis projects. This year we had the meeting at the Berkeley Institute for Data Science (and partially supported it with the Moore-Sloan Data Science Environment that spans UCB, UW, and NYU). It was organized (beautifully) by Kyle Barbary (UCB) and Phil Marshall (SLAC).

In the morning today, Jake VanderPlas (UW) and I both spoke about the basics of probabilistic inference. Then we had what Phil Marshall calls a “stand-up”, at which every participant introduced herself or himself, said what they wanted to learn, and said what they knew well and could help with. They also said what they wanted to do or produce, if there was a well-defined plan.

In the stand-up and early in the hack session, Adrian Price-Whelan (Princeton) talked about joining matched sequential colormaps into diverging colormaps, with one option (or, really, style) emphasizing values near zero, and one emphasizing values far from zero. He had immediate success, and showed some nice results. One amusing thing that might bear fruit later in the week is that the author of the (currently ascendant) Viridis colormap apparently has one of the BIDS desks in our vicinity this week. The constraints on a colormap are many and in tension: there are black-and-white printing issues, colorblindness issues, issues with saturating to white or to black, small-scale resolution issues, and so on.
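
In matplotlib, the joining operation is essentially just stacking the lookup tables of two sequential maps (a sketch of the idea, not Price-Whelan's code; the colormap choices are mine):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap

n = 256
# Light middle: values near zero fade into the page, extremes stand out.
light_middle = ListedColormap(np.vstack([
    plt.get_cmap("Blues_r")(np.linspace(0, 1, n)),    # dark blue -> light
    plt.get_cmap("Oranges")(np.linspace(0, 1, n))]),  # light -> dark orange
    name="BlueOrange")

# Dark middle: the reversed style, emphasizing values near zero instead.
dark_middle = ListedColormap(np.vstack([
    plt.get_cmap("Blues")(np.linspace(0, 1, n)),
    plt.get_cmap("Oranges_r")(np.linspace(0, 1, n))]),
    name="BlueOrange_r")
```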

I started (perhaps foolishly) two hacks. The first, which I started with Dalya Baron (TAU) and Matt Mechtley (ASU), was to make the demonstration I have of low-photon-rate, direct molecular imaging much more realistic. My demo, which I mention here, is an extreme toy, and there are many directions to make it less toy-like, and improve the internal engineering. I spent time with Baron and Mechtley getting them up to speed on what works and what doesn't, and what needs to change. The easiest change to make first is to go from one-dimensional angle sampling to full three-dimensional sampling in Euler angles (or, equivalently, projection matrices).

My second hack is to somehow, some way, build an MCMC sampler that can successfully and believably sample from all the modes in the multimodal posterior pdfs that we get in standard radial-velocity fitting problems (think: finding exoplanets and binary stars by measuring precise radial velocities). When the observations are sparse, the number of qualitatively different orbital solutions is large, and no sampler that I know of convincingly samples them all. Very late in the day, over coffee, Adrian Price-Whelan, Dan Foreman-Mackey (UW), and I had a very good idea: Sample exactly in a linear problem (mixture of sinusoids) that can be sampled more-or-less analytically, transform those samples into samples at the nearest points in the parameter space of the orbit-fitting problem, and then use importance sampling to get a provably correct (in the limit) sampling from the true posterior pdf. We have a plan for tomorrow!
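
The weighting step we sketched over coffee would look something like this (my notation; as the day-three entry above records, the cleanly-mapping linear model this requires never materialized):

```python
import numpy as np

def importance_resample(samples, ln_p, ln_q, rng=None):
    """samples: drawn exactly from the tractable linear density q;
    ln_p: log of the true orbit-fitting posterior at the (transformed)
    samples; ln_q: log density under q at the same points. Weighting
    by p/q and resampling gives, in the limit of many samples, a fair
    draw from p."""
    rng = rng or np.random.default_rng()
    ln_w = ln_p - ln_q
    w = np.exp(ln_w - ln_w.max())
    w /= w.sum()
    keep = rng.choice(len(samples), size=len(samples), replace=True, p=w)
    return samples[keep], w
```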

2016-08-26

MCMC tutorial and stellar ages

I spent my research time today working on finishing the instruction manual on MCMC that Foreman-Mackey and I are writing as part of my Data Analysis Recipes series. Our goal is to get this posted to arXiv this week. I enjoy writing but learned (once again) in the re-reading that I am not properly critical of my own prose. It really takes an outsider—or a long break—to see what needs to be fixed. The long-term preservation of science is in the hands of scientists, so the writing we do is important! Anyway, enough philosophy; this is pedagogy, not research: I am trying to make this document the most useful thing it can be. I also read and commented on a big new paper by Anna Y. Q. Ho on red giant masses and ages measured in the LAMOST project. Her paper includes an incredible map of the stellar ages as a function of position on the sky, and the different components of the Galaxy are obvious!

2016-08-25

new space!

In a low-research day, I got my first view of the new location of the NYU Center for Data Science, in the newly renovated building at 60 Fifth Ave. The space is a mix of permanent, hoteling, and studio space for faculty, researchers, staff, and students, designed to meet very diverse needs and wants. It is cool! I also discussed briefly with Daniela Huppenkothen (NYU) the scope of her first paper on the states of GRS 1915, the black-hole source with extremely complex x-ray timing characteristics.

2016-08-24

spectral signatures of convection; photo-zs without training data

I have spent part of the summer working with Megan Bedell (Chicago) to see if there is any evidence that radial-velocity measurements with the HARPS instrument might be affected by calibration issues, or might be helped by taking some kind of hierarchical approach to calibration. We weren't building that hierarchical model; we were looking to see whether there is evidence in the residuals for information that a hierarchical model could latch on to. We found nothing, to my surprise. I think this means that the HARPS pipelines are absolutely awesome. I think they are closed-source, so we can't do much but inspect the output.

Given this, we decided to start looking at stellar diagnostics—if it isn't the instrument calibration, then maybe it is actually the star itself: We need to ask whether we can see spectral signatures that predict radial velocity. This is a very general causal formulation of the problem: We do not expect that a star's spectrum will vary with the phase of an exoplanet's orbit (unless it is a very hot planet!), so if anything about the spectrum predicts the radial velocity, we have something to latch on to. The idea is that we might see the spectral signature of hot up-welling or cold down-welling at the stellar surface. There is much work in this area, but I am not sure that anyone has done anything truly data-driven (in the style, for example, of The Cannon). We discussed first steps towards doing that, with Bedell assigned plotting tasks, and me writing down some methodological ideas.
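
To make “data-driven” concrete, about the simplest version of the test would be a regularized linear regression of the pipeline velocities on the residual spectra; everything here (names, the ridge strength, the approach itself) is a hypothetical sketch, not something Bedell and I have agreed on:

```python
import numpy as np

def rv_from_residuals(residuals, rv, lam=1e2):
    """residuals: (n_epochs, n_pixels), each spectrum minus the star's
    mean spectrum; rv: (n_epochs,) pipeline radial velocities.
    Returns the RVs predicted by a ridge regression; any real
    predictive power would implicate the stellar surface
    (cross-validate before believing anything!)."""
    X = residuals - residuals.mean(axis=0)
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ rv)
    return X @ w
```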

Over lunch, Boris Leistedt and I caught up on all the various projects we like to discuss. He has had the breakthrough that—if you build a proper generative model for galaxy imaging data—you don't need to have spectroscopic training sets, nor good galaxy spectral models, to get good photometric redshifts. The idea is that once you have multi-band photometry, you can predict how any observed galaxy would appear at any other redshift, using a flexible, non-parametric SED model that isn't tied to any physical galaxy model. That way, we use all of, but only, what we believe about how redshift works, physically. Most machine-learning methods aren't required to get the redshift physics right, and most template-based models assume lots of auxiliary things about stars and stellar populations and dust. We also realized that, if done correctly, this method could subsume into itself the cross-correlation redshifts that the LSST project is excited about.

2016-08-22

the best image differencing ever

I had the pleasure today of reading two draft papers, one by Dun Wang on our alternative to difference imaging based on our data-driven pixel-level model of the Kepler K2 data, and the other by Huanian Zhang (Arizona) on H-alpha emission from the outskirts of distant galaxies. Wang's paper shows (what I believe to be) the most precise image differences ever created. Of course we had amazing data to start with! But his method for image differencing is unusual; it doesn't require a model of either PSF, nor of the difference between them. It just empirically figures out what linear combinations of pixels in the reference image predict each pixel in the target image, using the other images in the campaign to determine these predictor combinations. It works very well and has been used to find microlensing events in the K2C9 data, but it has the disadvantage that it needs to run on a variability campaign; it can't be run on just two images.
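
My cartoon of the core regression (from memory, with my own names and a made-up regularization strength; Wang's actual implementation is more careful than this):

```python
import numpy as np

def predict_pixel(stack, pix, predictors, target, lam=1e3):
    """stack: (n_images, n_pixels), the flattened image campaign.
    Learn, from all images *except* the target one, a
    ridge-regularized linear combination of predictor pixels that
    predicts pixel `pix`; then apply it to the target image. The
    difference image is data minus prediction, pixel by pixel."""
    train = np.delete(np.arange(stack.shape[0]), target)
    X, y = stack[train][:, predictors], stack[train, pix]
    w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    return stack[target, predictors] @ w
```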

The Zhang paper uses enormous numbers of galaxy-spectrum pairs in the SDSS spectroscopic samples to find H-alpha emission from the outskirts of (or—more precisely—angularly correlated with) nearby galaxies. He detects a signal! And it is 30 times fainter than any previous upper limit. So it is big news, I think, and has implications for the radiation environments of galaxies in the nearby Universe.

2016-08-19

halo occupation and assembly bias

My research highlight today was a conversation with MJ Vakili about the paper he wrote this summer about halo occupation and what's known as “assembly bias”. Perhaps the most remarkable thing about contemporary cosmology is that the dark-matter-only simulations do a great job of explaining the large-scale structure in the galaxy distribution, despite the fact that we don't understand galaxy formation! The connection is a “halo occupation function” that puts galaxies into halos. It turns out that incredibly simple prescriptions work.

I have always been suspicious about halo occupation, because galaxy halos are not fundamental objects in gravity or cosmology; they are defined by a prescription, running on the output of a simulation. That is, they are just effective crutches, used for convenience. There is no reason to ascribe any reality to a halo (or a sub-halo, or anything of the sort): Really there is just a density field! However, empirically, the halo description of the Universe has been both easy and useful.

Now that cosmology is seeking ever higher precision, work has started along the lines of asking what halo properties (mass, velocity amplitude, concentration, and so on) are relevant to the galaxies that form within them. The answer from the data seems to be that mass is the main driving factor. The community has expected a bias or occupation that depends on the time of formation of the halo (which itself relates to the halo concentration parameter). Vakili has been testing this, and the main punchline is that if the effect is there, it is a small one! It is a great result and he is nearly ready to submit.

My question is: Can we step out of the halo box and consider all the ways we might put galaxies into the dark-matter field? Could the data tell us what is most relevant?

2016-08-18

dynamics of M33

In a day short of research (because: getting ready to teach again!), I spent some time working with the Simons Foundation to prepare for the #GaiaSprint, which is coming up in 8 weeks. After that I had lunch with Ekta Patel (Arizona), who has been working on the dynamics of the Local Group, and especially understanding the orbits of M31 and M33.

2016-08-17

probabilistic redshifts

In the morning I had a long and overdue conversation with Alex Malz, who is attempting to determine galaxy one-point statistics given probabilistic photometric-redshift information. That is, each galaxy (as in, say, the LSST plan and some SDSS outputs) is given a posterior probability over redshifts rather than a strict redshift determination. How are these responsibly used? It turns out that the answer is not trivial: They have to be incorporated into a hierarchical inference, in which the (often implicit) interim priors used to make the p(z) outputs are replaced by a model for the distribution of galaxies. That requires (a) the mathematics of probability, and (b) knowing the interim priors. One big piece of advice or warning we have for current and future surveys is: Don't produce probabilistic redshifts unless you can produce the exact priors too! Some photometric-redshift schemes don't even really know what their priors are, and this is death.

In the afternoon, I discussed various projects with John Moustakas (Siena), around Gaia and large galaxies. He mentioned that he is creating a diameter-limited catalog and atlas of galaxies. I am very interested in this, but we had to part ways before discussing further.

2016-08-16

Moustakas

Coming back in from a short vacation, it was a low research day. John Moustakas (Siena) is in town this week, and we discussed the state of some of his projects. In particular, we discussed Guangtun Zhu's paper on discrete optimization for making archetype sets, and the awesomeness of that tool, which Moustakas and I intend to use in various galaxy contexts.

2016-08-14

hierarchical model for the redshift distribution

On the airplane home from MPIA (boo hoo!) I wrote the shortest piece of code I could that takes interim posterior p(z) redshift probability distributions from a set of galaxies and produces N(z) (and maybe other one-point statistics). I can make pathological cases in which there are terrible photometric-redshift outliers that are structured to cause havoc for N(z). But as long as you have a good generative model (and that is a big ask, I hate to admit), and as long as the providers of the p(z) information also provide the effective prior on z that was used to generate the p(z)s (another big ask, apparently), you can infer the true N(z) surprisingly accurately. This is work with Alex Malz and Boris Leistedt.
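
The airplane code itself isn't reproduced here, but the guts of such an inference can be this small (a sketch assuming everything lives on a shared redshift grid and the interim prior is nonzero everywhere):

```python
import numpy as np

def infer_nz(pz, interim_prior, n_iter=1000):
    """pz: (n_galaxies, n_bins), interim posteriors p_i(z) on a grid;
    interim_prior: (n_bins,), the prior the survey used to make them.
    Dividing out the interim prior recovers (up to a constant) the
    per-galaxy likelihoods; the loop is the standard EM update for
    the histogram bin heights of N(z)."""
    likes = pz / interim_prior
    f = np.full(pz.shape[1], 1.0 / pz.shape[1])
    for _ in range(n_iter):
        post = likes * f                     # re-apply the current N(z)
        post /= post.sum(axis=1, keepdims=True)
        f = post.mean(axis=0)                # update the bin heights
    return f
```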

2016-08-12

stars have simple spectra! The Cannon

Christina Eilers (MPIA) and I spent a long time today pair-coding her extension to The Cannon in which we marginalize over the true labels of the training data, under the assumption of small, known, Gaussian label noise. Our job was to vastly speed up optimization by getting correct derivatives (the gradient) of the objective function (a likelihood function) with respect to the parameters, and inserting these into a proper optimizer. We built tests, did some experimental coding, and then fully succeeded! Eilers's Cannon is slower than other implementations, but more scientifically conservative. We showed by the end of the day that the model becomes a better fit to the data as the label variances are made realistic. Stars really do have simple spectra!
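
The workhorse test in this kind of session is always the same (this is a generic version; our objective and parameter vector are specific to Eilers's model): compare the hand-derived gradient against centered finite differences, one parameter at a time.

```python
import numpy as np

def check_gradient(objective, gradient, theta, eps=1e-6, rtol=1e-4):
    """objective: R^n -> R; gradient: R^n -> R^n (the hand-derived
    derivatives). Raises if any component disagrees with a centered
    finite difference of the objective."""
    analytic = gradient(theta)
    for i in range(theta.size):
        step = np.zeros_like(theta)
        step[i] = eps
        numeric = (objective(theta + step) - objective(theta - step)) / (2 * eps)
        assert np.isclose(analytic[i], numeric, rtol=rtol), \
            f"parameter {i}: analytic {analytic[i]} vs numeric {numeric}"
```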

While we were working, Anna Y. Q. Ho and Sven Buder (MPIA) were discovering non-trivial covariances between stellar radial velocity (or possibly radial-velocity mis-estimation) and alpha abundances, with Ho working in LAMOST data and Buder working in GALAH data. Both are using The Cannon. After some investigation, we think the issue is probably related to the collision of alpha-estimating spectral features with ISM and telluric features. We discussed methods for mitigation, which range from censoring data at one end to fully modeling velocity along with the model parameters at the other.

Late in the day, I finished my response to the referee and submitted it.

2016-08-11

near-field cosmology in far-field galaxies

At Galaxy Coffee, Ben Weiner (Arizona) gave a talk about his great project (with many collaborators) to study very faint satellites around Milky-Way-like galaxies using overwhelming force: They are taking spectra of everything within the projected virial radius! That's thousands of targets, among which (for a typical parent galaxy), they find a handful of satellites. The punchline is that the Milky Way appears to be typical in its number of satellites, though there is certainly a range.

I spoke with Glenn Van de Ven (MPIA) about the possibility that he could upgrade his state-of-the-art Schwarzschild modeling of external-galaxy integral field data to something that would do chemo-dynamics. I suggested ways that he could keep the problem convex, but use regularization to reduce model complexity. We discussed baby steps towards the goal.

I also wrote a title and abstract (paper scope) for Adrian Price-Whelan and started on the same for Andy Casey.

2016-08-10

stars

As always, MPIA Milky Way group meeting was a pleasure today, featuring short discussions led by Nicholas Martin (Strasbourg), Adrian Price-Whelan, and Andy Casey. Casey showed his developments on The Cannon and applications to new surveys. Price-Whelan spoke about our ability to see possible warps (coherent responses) in the Milky Way disk from interactions with satellites. Martin showed amazing color-magnitude diagrams of stars in Andromeda satellite galaxies. So. Much. Detail.

Chaos reigned around me. Jonathan Bird and Melissa Ness worked on the Disco concept. Anna Y. Q. Ho, working on a suggestion from Casey, found Li lines in (a rare subsample of) LAMOST giants, leading to a whole new insta-project on Li. Price-Whelan figured out multiple methods for initializing and running MCMC on our single-line binary stars, initializing from either the prior or from literature orbits. It looks like many (or maybe all) of the APOGEE variable-velocity stars have multiple qualitatively different but nonetheless plausible orbital solutions. Casey and I conceived of a totally new way to build The Cannon as a local model for every test-step object; a non-parametric Cannon, if you wish. I spoke with Jeroen Bouwman (MPIA) about his (very promising) work using Dun Wang's Causal Pixel Model to fit the Spitzer data on transit spectroscopy for a hot Jupiter.

2016-08-09

ages and companions of stars

Anna Y. Q. Ho is in town to finish two—yes, two—papers on what can be learned about stellar properties from (relatively) low-resolution LAMOST spectroscopy. She has amazing results on ages and chemical abundances, which challenge long-held beliefs about what can be done at medium to low resolution. One of her two papers is about using C and N abundances to infer red-giant ages, as we did with APOGEE and The Cannon earlier. Ho and I met with Rix today to discuss error propagation from abundances to ages, and all the possible sources of scatter, including the unknown unknowns.

Adrian Price-Whelan started running our probabilistic inference of single-line spectroscopic binaries on the Troup et al sample. We had to complexify our noise model, since clearly there are variations larger than the error bars. We also had to reparameterize our binary-star parameters to a better set. In this process, we wanted to go from a phase angle to a time and back. Going from time to phase angle is a numerically stable mod() operation. Going from phase angle back to time can naively involve adding and subtracting huge numbers. We re-cast the function so no large subtractions ever happen. That was not totally trivial!
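
The principle (my reconstruction here, not our exact re-parameterization): do all arithmetic on the small offset from a reference epoch, never on absolute, Julian-date-sized times.

```python
import numpy as np

def time_to_phase(t, t_ref, period):
    """The stable direction: one mod() of the time offset."""
    return 2 * np.pi * np.mod(t - t_ref, period) / period

def phase_to_time(phase, t_ref, period):
    """The naive inverse forms an absolute time and then subtracts
    off whole periods—differencing huge, nearly equal numbers.
    Building the answer as t_ref plus an offset of at most one
    period avoids any catastrophic cancellation."""
    return t_ref + (np.mod(phase, 2 * np.pi) / (2 * np.pi)) * period
```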

Late in the day, Melissa Ness and Jonathan Bird interviewed Price-Whelan about ideas potentially going into the nascent Disco proposal.

2016-08-08

modeling binaries

All hell broke loose in Heidelberg today, as Andy Casey got done with his meeting downtown, Jonathan Bird (Vanderbilt) showed up to work on the Disco proposal for the next big thing with the SDSS hardware, Ben Weiner (Arizona) showed up to talk science, and Anna Ho came in to finish her new set of papers about the LAMOST data. And even with these distractions, Price-Whelan and I “decided” (I use scare quotes because our decision was heavily influenced by Rix!) to work on the single-line binaries in the APOGEE data.

Price-Whelan and I joined up my celestial mechanics code from June with the simulated APOGEE single-visit velocities through a likelihood function and got MCMC sampling working. We showed that you can say significant things about binary stars even with only a few observations; you don't need full coverage of the orbit to make substantial statements. Though it sure helps if you want very specific orbital parameters! Tomorrow we will hit real data; we will have to put in a noise model and some outlier modeling (probably).

Bird and I discussed the high-level point of the Disco proposal: We need it to express, clearly, an idea (or set of ideas) that is worth many tens of millions of dollars. That's hard; the project is very valuable and will have huge impact per dollar, but crystallizing a complex project into one bullet point is never trivial.

2016-08-07

ready to resubmit

I worked on the weekend to get my “Chemical tagging can work” paper ready for resubmission to the ApJ, incorporating referee and co-author comments, both of which made the paper much better. By Sunday it was good enough to send to the co-authors for final comments. In case it is some comfort to my loyal reader, it took me a full six months to get to this, which is embarrassing, but normal. And even then—when I sent it to the co-authors—it was missing a paragraph about the abundances in cluster M5. While Andy Casey and I were relaxing in a Heidelberg pub, he (Casey) wrote that final paragraph. I love my job!

2016-08-04

birthday paradox for stellar births

Yesterday at Milky Way group meeting, Adrian Price-Whelan brought up the possibility that the halo might be made up of many disrupted globular clusters. Sarah Martell (UNSW) showed up today and said more along these lines, based on chemical arguments. That got me thinking about the birthday paradox: If you have 30 people in a room, it is more likely than not that two of them share a birthday. The implication of this paradox for the Galaxy is the following:

Imagine that the Milky Way halo (or even better, bulge) is made up of 1000 disrupted stellar clusters that fell in. If we look at even 100-ish stars, we would expect, with very good confidence, to find pairs of stars with identical abundances. And this confidence can be kept high even if there is a smooth background of stars that doesn't participate in the cluster origin, and even if there are multiple populations in the original clusters. So if we can show that no pairs of stars are chemically co-eval, we can rule out all of these hypotheses with far less data than we already have in hand. Awesome! I wrote code to check this, but am far from having a real-data test.
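
The arithmetic is the classic birthday computation (this sketch is idealized—equal-population clusters, perfect abundance measurements—and is not the check code mentioned above):

```python
import numpy as np

def pair_probability(n, K):
    """Chance that at least two of n stars, each drawn uniformly from
    K disrupted clusters, share a cluster (and hence abundances)."""
    return 1.0 - np.prod(1.0 - np.arange(n) / K)

print(pair_probability(30, 365))    # the classic: ~0.71
print(pair_probability(100, 1000))  # the halo version: ~0.994
```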

2016-08-03

impromptu hack day

Andy Casey had the afternoon off from #FirstStarsV; that, and the presence of Adrian Price-Whelan, inspired me to suggest that we structure the afternoon like a hack day, in an undisclosed garden location in Heidelberg's Neuenheim district. We were joined by Christina Eilers (MPIA), Melissa Ness (MPIA), Hans-Walter Rix, Branimir Sesar (MPIA), and Gail Zasowski (JHU). Various projects were pitched and executed. My own work was on my response-to-referee (boring, I know!) and helping Eilers with coding up the objective function and derivatives for a version of The Cannon that permits the inclusion of stars with noisy and missing labels at training time. Casey worked on building giant-branch and main-sequence Cannon models and mixing them or switching between them. It appears to work amazingly well.

In the morning before that, MPIA Milky Way group meeting hosted a discussion by Price-Whelan of the possibility of understanding what original population of globular clusters was ground up and stripped into the present-day Milky-Way halo, and a discussion by Andy Casey of an amazingly low-metallicity, amazingly rapidly moving star that appears to have just fallen in from somewhere. These led to excited discussions, and, indeed, framed some of the projects performed at the above-mentioned hack day. For example, at the hack day, Price-Whelan made predictions for other stars that might be part of whatever cluster, group, or galaxy fell in with Casey's crazy star.

2016-08-02

projects with The Cannon

I got troubled this morning by the too-many-projects problem! In the subdomain of my life that is about modeling spectra of stars, and within that the subdomain that is thinking about APOGEE data, there are the following, which I don't know how to prioritize!

  • Fit for velocity widths and velocity offsets (redshifts) simultaneously with the star labels, to remove projections of velocity errors and line-spread-function (or microturbulence) variations onto parameters of interest.
  • Fit stars as linear combinations of stars at different velocities to find the double-lined spectroscopic binaries. Combine this with Kepler data to get the full properties of eclipsing binaries. We have many examples, and I expect we will find many more! We might put Adrian Price-Whelan onto parts of this this week.
  • Build (train) models for all parts of the H-R diagram, especially the subgiant and dwarf parts, where we have never produced good models. These are particularly important in the era of Gaia. We might convince Andy Casey to do some of this this week, and Sven Buder (MPIA) is also doing some of this in GALAH.
  • Project residuals onto (theoretically determined) derivatives with respect to element abundances, to get or check element abundances. This might also be used to build an element-abundance measuring system that doesn't require a full training set of abundances that we believe. Yuan-Sen Ting (UCSC) is producing the relevant derivatives right now.
  • Marginalize out noisy labels at training time, and marginalize out noisy internal parameters at test time. We have Christina Eilers (MPIA) on that one right now.
  • Look at going fully probabilistic, where we get posteriors over all labels and all internal parameters. I owe Jonathan Weare (Chicago) elements for this.
  • Include photometry into the training and test data to break the temperature–gravity degeneracies. And maybe also extinction! This is easy to do and ought to have a big impact.
  • Include priors on stellar structure and evolution to prevent results from departing from physically reasonable solutions. This is anathema to the stellar spectroscopy world (or most of it), but much desired by the customers of stellar parameter pipelines!
  • Add in latent variables to capture variations in stellar spectra not captured by the quadratic-on-labels model (sketched just after this list). Are the learned latent variables interpretable?
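
For reference, the quadratic-on-labels model that all of these projects extend is, at heart, just a design matrix (a minimal sketch; the real Cannon also fits an intrinsic-scatter term per wavelength pixel):

```python
import numpy as np

def quadratic_design_matrix(labels):
    """labels: (n_stars, n_labels), e.g. (Teff, logg, [Fe/H], ...).
    Each pixel's flux is modeled as a linear function of this vector:
    a constant, the labels, and all quadratic label products."""
    ones = np.ones((labels.shape[0], 1))
    cross = np.einsum("ni,nj->nij", labels, labels)
    iu = np.triu_indices(labels.shape[1])
    return np.hstack([ones, labels, cross[:, iu[0], iu[1]]])

# Training is then a per-pixel linear fit of flux on this matrix;
# at test time, the labels are optimized with the coefficients fixed.
```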

2016-08-01

inference of bolometric fluxes

Rix and I discussed many data analysis problems in the morning today. We have been discussing the possibility of measuring the bolometric fluxes of stars with very little (possibly vanishing) dependence on spectral assumptions. (The idea is: If you have enough bands, the spectrum or SED is very strongly tied down.) If we can combine these with other kinds of measurements (of, say, effective temperature), we can make predictions for interferometry without (heavy use of) stellar models! One constant that comes up in these discussions is 4πGσ, which is 4π times the gravitational constant times the Stefan-Boltzmann constant (have I mentioned that I hate it when constants are named after people?): Combining the luminosity L = 4πR²σT⁴ with the surface gravity g = GM/R² gives gL = 4πGσMT⁴, with the radius eliminated. Is this a new fundamental, astronomical constant?
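
A quick numerical sanity check of that identity, with rough solar values (my numbers, SI units throughout):

```python
import numpy as np

G = 6.674e-11      # m^3 kg^-1 s^-2
sigma = 5.670e-8   # W m^-2 K^-4
const = 4 * np.pi * G * sigma        # the constant in question

# g * L should equal (4 pi G sigma) * M * Teff^4, radius-free:
M, L, Teff, g = 1.989e30, 3.828e26, 5772.0, 274.0  # the Sun
print(g * L)                # ~1.05e29
print(const * M * Teff**4)  # ~1.05e29: agrees to ~0.1%
```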

We also discussed with Coryn Bailer-Jones (MPIA), Morgan Fouesneau (MPIA), and Rene Andre (MPIA) actually putting the bolometric-flux project into practice with real data. Fouesneau and Andre seem to have working code!

I also had the pleasure of reading and giving comments on some writing Bailer-Jones has been doing for a possible new textbook on computational data analysis. This is exciting!