2019-01-31

THE meeting, day 1

Today was the first day of the Terra Hunting Experiment collaboration meeting. This project is to use HARPS3 for a decade to find Earth-like planets around Sun-like stars. The conversation today was almost entirely about engineering and hardware, which I loved, of course! Many things happened, too many to describe here. One of the themes of the conversation, both in session and out, is that these ultra-precise experiments are truly integrated hardware–software systems. That is, there are deep interactions between hardware and software, and you can't optimally design the hardware without knowing what the software is capable of, and vice versa.

One presentation at the meeting that impressed me deeply was by Richard Hall (Cambridge), who has an experiment to illuminate CCD detectors with a fringe pattern from an interferometer. By sweeping the fringe pattern across the CCD and looking at the residuals, he can measure, extremely precisely, the effective centroid of every pixel in device coordinates. That is impressive, and pixel-position irregularity is now known to be one of the leading systematics in extreme-precision radial velocity: We can't just assume that the pixels sit on a perfect, regular, rectangular grid. I also worked out (roughly) a way that he could do this mapping with the science data, on sky! That is, we could self-calibrate the sub-pixel shifts. This is highly related to things Dustin Lang (Perimeter) and I did for our white paper about post-wheel Kepler.
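
Here's a tiny toy sketch of why the fringe sweep works (in Python, with made-up numbers; this is emphatically not Hall's actual pipeline): record each pixel's response as a function of fringe phase, and a linear fit to the cosine and sine components hands you that pixel's effective centroid, modulo the fringe wavelength.

```python
import numpy as np

# Toy sketch (not Hall's pipeline): sweep a sinusoidal fringe across a 1-D
# "detector" and fit each pixel's response vs. fringe phase to recover its
# sub-pixel centroid offset from the nominal grid.
rng = np.random.default_rng(42)
npix, nphase = 64, 200
lam = 7.3                                  # fringe wavelength in pixels (arbitrary)
k = 2.0 * np.pi / lam
true_offsets = 0.05 * rng.standard_normal(npix)
phases = np.linspace(0.0, 2.0 * np.pi, nphase, endpoint=False)

x_nominal = np.arange(npix)
x_true = x_nominal + true_offsets
data = 1.0 + np.cos(k * x_true[:, None] + phases[None, :])   # pixel x phase
data += 0.01 * rng.standard_normal(data.shape)               # read noise

# Per pixel, fit data(phi) = a0 + b cos(phi) - d sin(phi); then
# b = cos(k x_eff) and d = sin(k x_eff), so x_eff = atan2(d, b) / k (mod lam).
A = np.vstack([np.ones(nphase), np.cos(phases), -np.sin(phases)]).T
est_offsets = np.empty(npix)
for i in range(npix):
    a0, b, d = np.linalg.lstsq(A, data[i], rcond=None)[0]
    x_eff = np.arctan2(d, b) / k
    est_offsets[i] = (x_eff - x_nominal[i] + lam / 2) % lam - lam / 2

print("rms centroid error:", np.std(est_offsets - true_offsets))
```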

2019-01-29

dark matter as a latent-variable field

It was a light research day today! But I did get in a conversation with Ana Bonaca (Harvard) about dark matter and information. She has written a nice paper on what information cold stellar streams bring about the gravitational force field or potential in static models of the Galaxy. We have a bit of work on what information perturbed streams (perturbed by a compact dark-matter substructure) bring. We have ideas about how to think about information in the time-dependent case. And on a separate thread, Bonaca has been thinking about what information the stellar kinematics and chemistry bring.

In some sense, the dark matter is the ultimate latent-variable model: The observations interact with the dark matter very weakly (and I'm using “weak” in the non-physics sense), but ultimately everything is driven by the dark matter. A big part of contemporary astrophysics can be seen in this way: We are trying to infer as many properties of this latent-variable field as we can. Because of this structure, I very much like thinking about it all in an information-theoretic way.

2019-01-28

dark photons, resonant planets

At the CCPP Brown-Bag, Josh Ruderman (NYU) spoke about dark photons with a tiny coupling to real photons. The idea (roughly) is that the dark matter produces (by decays or transitions) dark photons, and these have a small coupling to real photons, but importantly the dark photons are not massless. He then showed that there is a huge open parameter space for such models (that is, models not yet ruled out by any observations) that could nonetheless strongly distort the cosmic background radiation at long wavelengths. And, indeed, there are claims of excesses at long wavelengths. So this is an interesting angle for a complex dark sector. My loyal reader knows I am a fan of having complexity in the dark sector.

In the afternoon, I met up with Anu Raghunathan (NYU) to discuss possible starter projects. I pitched a project to look at our ability to find (statistically) exoplanet-transit-like signals in data. I want to understand in detail how much more sensitive we could be to transiting exoplanets in resonant chains than we would be to the individual planets treated independently. There must be a boost here, but I don't know what it is yet.
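
As a cartoon of the boost I have in mind (made-up numbers; a real joint search would have to pay for the extra trials and degrees of freedom): two individually marginal transit signals at resonance-linked periods can jointly clear a threshold that neither clears alone.

```python
import numpy as np

# Cartoon of the resonant-chain boost (made-up numbers): if two planets sit at
# resonance-linked periods, their detection statistics can be combined, so two
# individually marginal signals may clear a threshold together.
def joint_snr(snr_inner, snr_outer):
    # for roughly independent signals, the squared SNRs add
    return np.hypot(snr_inner, snr_outer)

snr_a, snr_b = 3.2, 3.5        # per-planet SNRs from independent searches
threshold = 4.5                # illustrative single-planet detection threshold
print(snr_a > threshold, snr_b > threshold)    # neither alone: False, False
print(joint_snr(snr_a, snr_b) > threshold)     # together: True (about 4.7)
```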

2019-01-26

SCIMMA workshop, day 2

I officially don't go to meetings on the weekend! That said, I did go to day 2 of a workshop on multi-messenger astrophysics (and, in particular, the field's computing and information infrastructure needs) at Columbia University today. A lot happened, and there were even some fireworks, because there are definitely disagreements among the physicists, the computer scientists, the information scientists, and the high-performance computing experts about what is important, what is hard, and what is in whose domain! I learned a huge amount today, but here are two highlights:

In its current plan (laid out at the meeting by Mario Juric of UW), the LSST project officially doesn't do any scientific analyses; it is only a data source. In this way it is like ESA Gaia. It is trying to do a lot of social engineering to make sure the community organizes good data-analysis and science efforts around the LSST data outputs and APIs. Famously and importantly, it will produce hundreds of thousands to millions of alerts per night, and a lot of the interest is in how to interact with this firehose, especially in multi-messenger, where important things can happen in the first seconds of an astrophysical event.

During Juric's talk, I realized that in order for us to optimally benefit from LSST, we need to know, in advance, where LSST is pointing. Everyone agreed that this will happen (that is, that this feed will exist), and that (relative to the alerts stream) it is a trivial amount of data. I hope this is true. It's important! Because if you are looking for things that go off on the sky, you learn much more when one of them goes off inside the LSST field of view while LSST is observing it. So maybe looking under the lamp-post is a good idea!

The LCOGT project was represented by Andy Howell (LCOGT). He talked about what they have learned in operating a heterogeneous, global network of telescopes with diverse science goals. He had various excellent insights. One is that scheduling requires very good specification of objectives and good engineering. Another is that openness is critical, and most break-downs are break-downs of communication. Another is that there are ways to structure things to reward generosity among the players. And so on. He talked about LCOGT but he is clearly thinking forward to a future in which networks become extremely heterogeneous and involve many players who do not necessarily all trust one another. That's an interesting limit!

2019-01-25

deep generative models and inference

In my group meeting today, we had a bit of a conversation about deep generative models, inspired by my trip to Math+X Houston. The Villar talk reminded me of the immense opportunity that exists for using deep generative models (like GANs) as part of a bigger inference. I pitched some of my ideas at group meeting.

2019-01-24

Math+X Houston, day 2

Today was day 2 of the 2019 Math+X Symposium on Inverse Problems and Deep Learning in Space Exploration at Rice University in Houston. Again I saw and learned way too much to write in a blog post. Here are some random things:

In a talk about provably or conjecturally effective tricks for optimization, Stan Osher (UCLA) showed some really strange results, like that an operator that (pretty much arbitrarily) smooths the derivatives improves optimization. And the smoothing is in a space where there is no metric or sense of adjacency, so the result is super-weird. But the main takeaway from his talk for me was that we should be doing what he calls “Nesterov” when we do gradient descent. It is like adding in some inertia or momentum to the descent. That wasn't his point! But it was very useful for me.
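
For my own notes, here is the textbook Nesterov-momentum update compared to plain gradient descent on a badly conditioned quadratic (this is just the standard scheme, nothing specific to Osher's talk):

```python
import numpy as np

# Plain gradient descent vs. Nesterov-style momentum on a badly conditioned
# quadratic, f(x) = 0.5 x^T A x.
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x

def gd(x, step=0.009, iters=200):
    for _ in range(iters):
        x = x - step * grad(x)
    return x

def nesterov(x, step=0.009, mu=0.9, iters=200):
    v = np.zeros_like(x)
    for _ in range(iters):
        v = mu * v - step * grad(x + mu * v)   # look-ahead gradient
        x = x + v
    return x

x0 = np.array([1.0, 1.0])
print("plain GD :", np.linalg.norm(gd(x0)))        # slow along the shallow direction
print("Nesterov :", np.linalg.norm(nesterov(x0)))  # orders of magnitude closer to 0
```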

There was a great talk by Soledad Villar (NYU), who showed some really nice uses of deep generative models (in the form of a GAN, but it could be anything) to de-noise data. This, for a mathematician, is like inference for an astronomer: The GAN (or equivalent) trained on data becomes a prior over new data. This connects strongly to things I have been trying to get started with Gaia data and weak-lensing data! I resolved to find Villar back in NYC in February. She also showed some nice results on constructing continuous deep-learning methods, which don't need to work in a discrete data space. I feel like this might connect to non-parametrics.
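
Here is the structure I have in mind, as a minimal sketch: treat a trained generator as a prior and find the MAP latent vector for noisy data. (A fixed random linear map stands in for the generator so the gradient is analytic; with a real GAN you would backpropagate through it instead.)

```python
import numpy as np

# Toy sketch of "generative model as prior" denoising: y = G(z) + noise, and we
# seek the MAP latent z minimizing 0.5 |y - G(z)|^2 / sigma^2 + 0.5 |z|^2.
# A fixed linear map stands in for a trained generator.
rng = np.random.default_rng(0)
d_latent, d_data, sigma = 8, 100, 0.5
G = rng.standard_normal((d_data, d_latent))        # stand-in "generator"

z_true = rng.standard_normal(d_latent)
y = G @ z_true + sigma * rng.standard_normal(d_data)

z = np.zeros(d_latent)
step = 1e-3
for _ in range(2000):
    resid = G @ z - y
    grad = (G.T @ resid) / sigma**2 + z            # gradient of the MAP objective
    z = z - step * grad

denoised = G @ z
print("noise rms   :", np.std(y - G @ z_true))
print("residual rms:", np.std(denoised - G @ z_true))
```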

In the side action at the meeting, I had some valuable discussions. One of the most interesting was with Katherine de Kleer (Caltech), who has lots of interesting data on Io. She has mapped the surface using occultation, but also just has lots of near-infrared adaptive-optics imaging. She needs to find the volcanoes, and currently does so using human input. We discussed what it would take to replace the humans with a physically motivated generative model. By the way (I learned from de Kleer): The volcanoes are powered by tidal heating, and that heating comes from Io's eccentricity, which is 0.004. Seriously you can tidally heat a moon to continuous volcanism with an eccentricity of 0.004. Crazy Solar System we live in!
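
To put a rough formula behind that surprise (this is just the standard Peale-type expression for a synchronously rotating satellite, quoted from memory, so treat it as a scaling argument): the tidal heating rate goes as

\dot{E}_{\rm tidal} \sim \frac{21}{2}\,\frac{k_2}{Q}\,\frac{G\,M_J^2\,n\,R^5}{a^6}\,e^2 ,

where M_J is Jupiter's mass, n the orbital mean motion, R Io's radius, a the semi-major axis, and k_2/Q the (very uncertain) tidal response of Io. The heating only scales as e squared, but the M_J^2 R^5 / a^6 factor is so enormous for Io that e = 0.004 is enough to keep the volcanoes going.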

In the afternoon, Rob Fergus (NYU) talked about the work we have done on exoplanet direct detection with generative models. He has also done more-or-less the same thing (reproducing our results, with Muandet and Schölkopf) using discriminative models. That's interesting, because discriminative models are rarely used (or rarely power-used) in astronomy.

2019-01-23

Math+X Houston, day 1

Today was the first day of the 2019 Math+X Symposium on Inverse Problems and Deep Learning in Space Exploration, which is a meeting to bring together mathematicians and domain scientists to discuss problems of mutual interest. I learned a huge amount today! I can't summarize the whole day, so here are just a few things ringing in my brain afterwards:

Sara Seager (MIT) and I both talked about how machine learning helps us in astrophysics. She focused more on using machine learning to speed computation or interpolate or emulate expensive atmospheric retrieval models for exoplanet atmospheres. I focused more on the use of machine learning to model nuisances or structured noise or foregrounds or backgrounds in complex data (focusing on stars).

Taco Cohen (Amsterdam) showed a theory of how to make fully, locally gauge-invariant (what I would call “coordinate free”) deep-learning models. And he gave some examples. Although he implied that the continuous versions of these models are very expensive and impractical, the discretized versions might have great applications in the physical sciences, which we believe truly are gauge-invariant! In some sense he has built a superset of all physical laws. I'd be interested in applying these to things like CMB and 21-cm foregrounds.

Jitendra Malik (Berkeley) gave a nice talk about generative models moving beyond GANs, where he is concerned (like me) with what's called “mode collapse” or the problem that the generator can beat the discriminator without making data that are fully representative of all kinds of real data. He even name-checked the birthday paradox (my favorite of the statistical paradoxes!) as a method for identifying mode collapse. Afterwards Kyle Cranmer (NYU) and I discussed with Malik and various others the possibility that deep generative models could possibly play a role in implicit or likelihood-free inference.
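
The birthday-paradox check is simple enough to sketch (toy version, with a discrete fake “generator”; for real images you would count near-duplicates rather than exact matches): count duplicate pairs in a batch of N samples, and the effective support size is of order the number of pairs divided by the number of collisions.

```python
import numpy as np

# Birthday-paradox heuristic for mode collapse (toy, discrete "generator"):
# if a batch of N samples already contains duplicate pairs, the generator's
# effective support size is only of order N^2.
rng = np.random.default_rng(1)

def count_collisions(samples):
    # number of identical pairs in the batch (use a near-duplicate test for images)
    _, counts = np.unique(samples, return_counts=True)
    return int(np.sum(counts * (counts - 1) // 2))

support = 20_000                              # true number of distinct "modes"
batch = rng.integers(0, support, size=300)    # N = 300 draws
n_pairs = 300 * 299 // 2
collisions = count_collisions(batch)
print("collisions:", collisions)
# rough support estimate, assuming roughly uniform sampling over modes
print("implied support ~", n_pairs / max(collisions, 1))
```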

There were many other amazing results, including finding seismic precursors to landslides (Seydoux) and using deep models to control adaptive optics (Nousiainen) and analyses of why deep-learning models (which have unimaginable capacity) aren't insanely over-fitting (Zdeborová). On that last point the short answer is: No-one knows! But it is really solidly true. My intuition is that it has something to do with the differences between being convex in the parameter space and being convex in the data space. Not that I'm saying anything is either of those!

2019-01-22

inferring maps; detecting waves

I had a great conversation this morning with Yashar Hezaveh (Flatiron) about problems in gravitational lensing. The lensing map is in principle a linear thing (once you set the nonlinear lensing parameters) which means that it is possible to marginalize out the source plane analytically, in principle, or to apply interesting sparseness or compactness priors. We discussed computational data-analysis ideas.
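
The marginalization itself is just the standard linear-Gaussian identity: if d = L(theta) s + n, with a Gaussian prior on the source s and Gaussian noise n, then integrating out s gives d ~ N(0, L S L^T + N), which depends only on the nonlinear lens parameters theta. A minimal sketch, with a random matrix standing in for the lensing operator:

```python
import numpy as np

# Sketch of marginalizing out a linear source plane: d = L(theta) s + n with
# s ~ N(0, S) and n ~ N(0, N) gives a marginal likelihood d ~ N(0, L S L^T + N),
# which depends only on the nonlinear lens parameters theta.
def marginal_loglike(d, L, S, N):
    C = L @ S @ L.T + N                      # marginal covariance
    sign, logdet = np.linalg.slogdet(2.0 * np.pi * C)
    chi2 = d @ np.linalg.solve(C, d)
    return -0.5 * (logdet + chi2)

rng = np.random.default_rng(3)
n_src, n_pix = 25, 60
L = rng.standard_normal((n_pix, n_src))      # stand-in lensing operator L(theta)
S = np.eye(n_src)                            # source prior covariance
N = 0.1 * np.eye(n_pix)                      # pixel-noise covariance
s_true = rng.standard_normal(n_src)
d = L @ s_true + np.sqrt(0.1) * rng.standard_normal(n_pix)

print(marginal_loglike(d, L, S, N))
```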

Prior to that, I had an interesting conversation with Rodrigo Luger (Flatiron) and Dan Foreman-Mackey (Flatiron) about the priors we use when we do stellar-surface or exoplanet-surface modeling (map-making). Most priors that are easy to use enforce smoothness, but our maps might have sharp features (coastlines!). But we more-or-less agreed that more interesting priors are also less computationally tractable. Duh!

At mid-day, Chiara Mingarelli (Flatiron) argued in a great seminar that pulsar timing will make a gravitational-wave detection within the next few years. Her argument is conservative, so I am very optimistic about this.

2019-01-21

planets around hot stars

My research highlight for the day was a conversation with Ben Pope (NYU) about projects involving hot stars. We have been kicking around various projects and we realized in the call that they really assemble into a whole research program that is both deep and broad:

There are problems related to finding transiting planets around hot stars, which is maybe getting less attention than it should, in part because there are technical challenges (that I think we know how to overcome). And planets found around hot stars might have very good properties for follow-up observations (like transit spectroscopy, for example, and reflected light), and also good prospects for harboring life! (Okay I said it.)

There are problems related to getting stellar ages: Hot stars have lifetimes and evolutionary changes on the same timescales as we think exoplanetary systems evolve dynamically, so there should be great empirical results available here. And hot stars can have reasonable age determinations from rotation periods and post-main-sequence evolution. And we know how to make those age determinations.

And: The hot-star category includes large classes of time-variable, chemically peculiar stars. We now at Flatiron (thanks to Will Farr and Rodrigo Luger) have excellent technology for modeling spectral surface features and variability. These surface maps have the potential to be extremely interesting from a stellar model perspective.

Add to all this the fact that NASA TESS will deliver outrageous numbers of light curves, and spectroscopic facilities and surveys abound. We have a big, rich research program to execute.

2019-01-18

Dr Lukas Heinrich

It was an honor and a privilege to serve on the PhD defense committee of Lukas Heinrich (NYU), who has had a huge impact on how particle physicists do data analysis. For one, he has designed and built a system that permits re-use of intermediate data results from the ATLAS experiment in new data analyses, measurements, and searches for new physics. For another, he has figured out how to preserve data analyses and workflows in a reproducible framework using containers. For yet another, he has been central in convincing the ATLAS experiment and CERN more generally to adopt standards for the registration and preservation of data analysis components. And if that's not all, he has structured this so that data analyses can be expressed as modular graphs and modified and re-executed.

I'm not worthy! But in addition to all this, Heinrich is a great example of the idea (that I like to say) that principled data analysis lies at the intersection of theory and hardware: His work on ruling out supersymmetric models using ATLAS data requires a mixture of theoretical and engineering skills and knowledge that he has nailed.

The day was a pleasure, and that isn't just the champagne talking. Congratulations Dr. Heinrich!

2019-01-17

taking the Fourier transform of a triangle?

As my loyal reader knows, Kate Storey-Fisher (NYU) and I are looking at the Landy–Szalay estimator for the correlation function along a number of different axes. Along one axis, we are extending it to estimate a correlation function that is a smooth function of parameters rather than just in hard-edged bins. Along another, we are asking why the correlation function is so slow to compute when the power spectrum is so fast (and they are equivalent!). And along another, we are consulting with Alex Barnett (Flatiron) on the subject of whether we can estimate a correlation function without having a random catalog (which is, typically, 100 times larger than the data, and thus dominates all compute time).

Of course when you get a mathematician involved, strange things often happen. One thing is that Barnett has figured out that the Landy–Szalay estimator appears in literally no literature other than cosmology! And of course even in cosmology it is only justified in the limit of near-zero, Gaussian fluctuations. That isn't the limit of most correlation-function work being done these days. In the math literature they have different estimators. It's clear we need to build a testbed to check the estimators in realistic situations.
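
For the record (and as a starting point for that testbed), here is the estimator as it is usually written, (DD - 2 DR + RR) / RR with normalized pair counts, in its naive O(N^2) form and with made-up toy data:

```python
import numpy as np
from scipy.spatial.distance import pdist, cdist

# Brute-force Landy-Szalay estimator, (DD - 2 DR + RR) / RR with normalized
# pair counts; the naive O(N^2) version a testbed would start from.
def landy_szalay(data, randoms, bins):
    nd, nr = len(data), len(randoms)
    dd, _ = np.histogram(pdist(data), bins=bins)
    rr, _ = np.histogram(pdist(randoms), bins=bins)
    dr, _ = np.histogram(cdist(data, randoms).ravel(), bins=bins)
    dd = dd / (nd * (nd - 1) / 2)
    rr = rr / (nr * (nr - 1) / 2)
    dr = dr / (nd * nr)
    return (dd - 2.0 * dr + rr) / rr

rng = np.random.default_rng(4)
data = rng.random((500, 3))          # toy unclustered "galaxies" in a unit box
randoms = rng.random((2500, 3))      # only 5x randoms, to keep the toy fast
bins = np.linspace(0.05, 0.3, 11)
print(landy_szalay(data, randoms, bins))   # should scatter around zero
```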

One thing that came up in our discussion with Barnett is that it looks like we don't ever need to make a random catalog! The role that the random catalog plays in the estimation could be played (for many possible estimators) by an auto-correlation of the survey window with itself, which in turn is a trivial function of the Fourier transform of the window function. So instead of delivering, with a survey, a random catalog, we could perhaps just be delivering the Fourier transform of the window function out to some wave number k. That's a strange point!
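
Here is the gist on a pixelized window (a toy sketch; in practice you would zero-pad to kill the periodic wrap-around and treat the normalization much more carefully):

```python
import numpy as np

# Toy sketch: on a grid, the RR-like term is just the autocorrelation of the
# survey window, i.e. the inverse FFT of |FFT(window)|^2, so no random catalog
# ever needs to be generated.
ngrid = 128
window = np.zeros((ngrid, ngrid))
window[20:100, 30:110] = 1.0                     # toy survey footprint

W_k = np.fft.rfftn(window)
autocorr = np.fft.irfftn(np.abs(W_k) ** 2, s=window.shape)
autocorr /= window.sum() ** 2                    # crude RR-like normalization
print(autocorr[0, 0], autocorr[10, 0])           # window overlapped with shifted self
```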

In the discussion, I thought we might actually have an analytic expression for the Fourier transform of the window function, but I was wrong: It turns out that there aren't analytic expressions for the Fourier transforms of many functions, and in particular the Fourier transform of the characteristic function of a triangle (the function that is unity inside the triangle and zero outside) doesn't have a known form. I was surprised by that.

2019-01-16

MySpace, tagging

Wednesdays at Flatiron are pretty fun! Today Kathryn Johnston (Columbia) convened a discussion of dynamics that was pretty impressive. In that discussion my only contribution was to briefly describe the project that Adrian Price-Whelan (Princeton) and I have started called MySpace. This project is to find (in a data-driven way) the local transformation of velocity as a function of position that makes the local disk velocity structure more coherent over a larger patch of the disk.

At first we thought we were playing around with this idea, but then we realized that it produces an unsupervised, data-driven classification of all the stars: Stars in local velocity-space concentrations either lie in concentrations that extend in some continuous way over a larger patch of the disk, or they do not. And this ties into the origins of the velocity substructure. While I was talking about this, Robyn Sanderson (Flatiron) pointed out that if the substructure is created by resonances or forcing by the bar, there are specific predictions for how the local transformation should look. That's great, because it is a data-driven way of looking at the Milky Way bar. Sanderson also gave us relevant references in the literature.
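
Here is a toy version of the MySpace idea (the clumpiness objective below is something I made up for illustration; it is not our actual objective): fit a linear dependence of velocity on position, v' = v - A x with positions measured from the patch center, and choose the matrix A that makes the transformed velocities as clumpy as possible.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial import cKDTree

# Toy version of the MySpace idea (not our actual objective): find a linear
# velocity-vs-position transformation v' = v - A x that makes the velocity
# distribution clumpier, here by minimizing the mean distance to the
# 5th-nearest neighbor in transformed-velocity space.
rng = np.random.default_rng(5)
nstars = 2000
x = rng.uniform(-1.0, 1.0, size=(nstars, 2))          # positions (toy units)
A_true = np.array([[0.0, 8.0], [-8.0, 0.0]])          # toy velocity-position shear
clump = rng.integers(0, 5, size=nstars)               # 5 velocity clumps
v_clump = 10.0 * rng.standard_normal((5, 2))
v = v_clump[clump] + 1.0 * rng.standard_normal((nstars, 2)) + x @ A_true.T

def clumpiness(A_flat):
    A = A_flat.reshape(2, 2)
    v_prime = v - x @ A.T
    d, _ = cKDTree(v_prime).query(v_prime, k=6)        # k=6: self plus 5 neighbors
    return d[:, 5].mean()

res = minimize(clumpiness, np.zeros(4), method="Nelder-Mead")
print("recovered A:\n", res.x.reshape(2, 2))           # should be close to A_true
```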

Late in the day, I wrote down some ideas about how we might tease apart metallicity, age, and kinematics in local samples of stars. The sample of Solar Twins from Megan Bedell (Flatiron) has empirical properties that suggest that a lot of the chemical (element-abundance-ratio) diversity of the stars is strongly related to stellar age. So is there information left for chemical tagging? Maybe not. I tried to write down a framework for asking these questions.

2019-01-15

Dr Kilian Walsh

One of the great pleasures of my job is being involved in the bestowal of PhDs! Today, Kilian Walsh (NYU) defended his PhD, which he did with Jeremy Tinker (NYU). The thesis was about the connections between galaxies and their halos. As my loyal reader knows, I find it amazing that this description of the world works at all, but it works incredibly well.

One of the puzzling results from Walsh's work is that although halos themselves have detailed properties that depend on how, where, and when they assembled their mass, the properties of the galaxies that they contain don't seem to depend on any halo property except the mass itself! So while the halos have (say) spin parameters that depend on assembly time, the galaxies don't seem to have properties that depend on the halo spin parameter! Or if they do, it's pretty subtle. This subject is called halo assembly bias and galaxy assembly bias; there is plenty of the former and none of the latter. Odd.

Of course the tools used for this are blunt tools, because we don't get to see the halos! But Walsh's work has been about sharpening those tools. (I could joke that he sharpens them from extremely blunt to very blunt!) For example, he figured out how to use the void probability function in combination with clustering to put stronger constraints on halo occupation models.

Congratulations Dr. Walsh!

2019-01-14

out sick

I was out sick today. It was bad, because I was supposed to give the kick-off talk at Novel Ideas for Dark Matter in Princeton.

2019-01-11

what's permitted for target selection?

Because I have been working with Rix (MPIA) to help the new project SDSS-V make plans to choose spectroscopic targets, and also because of work I have been doing with Bedell (Flatiron) on thinking about planning radial-velocity follow-up observations, I find myself saying certain things over and over again about how we are permitted to choose targets if we want it to be easy (and even more importantly, possible) to use the data in a statistical project that, say, determines the population of stars or planets, or, say, measures the structural properties of the Milky Way disk. Whenever I am saying things over and over again, and I don't have a paper to point to, that suggests we write one. So I started conceiving today a paper about selection functions in general, and what you gain and lose by making them more complicated in various ways. And what is not allowed, ever!

2019-01-10

#hackAAS at #aas233

Today was the AAS Hack Together Day, sponsored by the National Science Foundation and by the Moore Foundation, both of which have been very supportive of the research I have done, and both of which are thinking outside the box about how we raise the next generation of scientists! We had a huge group and lots happened. If you want to get a sense of the range and scope of the projects, look at these telegraphic wrap-up slides, which (as always) only paint a partial picture!

We were very fortunate to have Huppenkothen (UW) in the room, and in (literally) five minutes before we started, she put together these slides about hack days. I love that! I think Huppenkothen is the world ambassador and chief philosopher of hacking.

I worked on two hacks. Well really one. The one I didn't really work on was to launch a Mastodon instance. Mastodon is the open-source alternative to Twitter(tm) and has nice features like content warnings (on which you can filter) and community-governable rules and restrictions. I thought it might be fun to try to compete with the big players in social! Although I didn't work on it at all, Dino Bektešević (UW) took over the project and (with a lot of hacking) got it up and running on an AWS instance. It took some hacking because (like many open-source projects) the documentation and tutorials were out of date and filled with version (and other) inconsistencies. But Bektešević (and I by extension) learned a lot!

The hack I actually did (a very tiny, tiny bit of) work on was to write a stellar-binaries-themed science white paper for the Decadal Survey. Katie Breivik (CITA) and Adrian Price-Whelan (Princeton) are leading it. Get in touch with us if you want to help! The point is: Binary stars are a critical part of every science theme for the next decade.

2019-01-09

#AAS233, day 3

I arrived today at #AAS233. I'm here mainly for the Hack Together Day (which is tomorrow), but I did go to some exoplanet talks. One nice example was Molly Kosiarek (UCSC), who talked about a small planet in some K2 data. She fit a Gaussian Process to the K2 light curve and used it to determine kernel parameters for a quasi-periodic stochastic process. She then used those kernel parameters to fit the radial-velocity data and improve her constraints on the planet mass. She writes more in this paper. Her procedure involves quite a few assumptions, but it is cool because it is a kernel-learning problem, and she was explicitly invoking an interesting kind of generalizability (learning on the light curve, applying to the spectroscopy).
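
The structure of that kind of analysis is easy to sketch (this is a generic quasi-periodic-kernel Gaussian Process, not Kosiarek's actual pipeline): learn the kernel hyper-parameters on the photometry, then hold the length scale, roughness, and period fixed (re-fitting only the amplitude) in the GP noise model for the radial velocities, alongside the Keplerian orbit.

```python
import numpy as np
from scipy.optimize import minimize

# Quasi-periodic kernel and GP likelihood (generic sketch, synthetic data):
#   k(tau) = amp^2 exp(-tau^2 / (2 ell^2)) exp(-gamma sin^2(pi tau / P))
def qp_kernel(t1, t2, amp, ell, gamma, period):
    tau = t1[:, None] - t2[None, :]
    return (amp**2 * np.exp(-0.5 * (tau / ell) ** 2)
            * np.exp(-gamma * np.sin(np.pi * tau / period) ** 2))

def gp_neg_loglike(params, t, y, yerr):
    amp, ell, gamma, period = np.exp(params)          # fit in log space
    K = qp_kernel(t, t, amp, ell, gamma, period) + np.diag(yerr**2)
    sign, logdet = np.linalg.slogdet(K)
    return 0.5 * (logdet + y @ np.linalg.solve(K, y))

# Tiny synthetic "light curve" drawn from the kernel, then hyper-parameter fit.
rng = np.random.default_rng(6)
t = np.sort(rng.uniform(0.0, 80.0, 120))
yerr = 0.05 * np.ones_like(t)
K_true = qp_kernel(t, t, 1.0, 30.0, 4.0, 11.0) + np.diag(yerr**2)
y = np.linalg.cholesky(K_true) @ rng.standard_normal(t.size)

res = minimize(gp_neg_loglike, x0=np.log([0.5, 20.0, 2.0, 10.0]),
               args=(t, y, yerr), method="Nelder-Mead")
print("learned [amp, ell, gamma, period]:", np.exp(res.x))
```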

Late in the day I had a conversation with Jonathan Bird (Nashville) about the challenges of getting projects done. And another with Chris Lintott (Oxford) about scientific communication on the web and in the journals.

2019-01-08

reproducing old results

I spent a bit of research time making near-term plans with Storey-Fisher (NYU), who is developing new estimators of clustering statistics. Because clustering is two-point (at least), computational complexity is an issue; she is working on getting things fast. She has had some success; it looks like we are fast enough now. The near-term goals are to reproduce some high-impact results from some Zehavi papers on SDSS data. Then we will have a baseline to beat with our new estimators.

2019-01-07

expected future-discounted discovery rate

My tiny bit of research today was on observation scheduling: I read a new paper by Bellm et al. about scheduling wide-field imaging observations for ZTF and LSST. It does a good job of laying out the issues, but it doesn't meet my (particular, constrained) needs, in part because Bellm et al. are (sensibly) scheduling full nights of observations rather than going just-in-time, and they run separate optimizations for volume searched and slew overheads. It is still highly relevant to what I have been doing, and it had lots of great references that I didn't know about! They make a strong case for optimizing full nights rather than scheduling just-in-time; I agree that this is better, provided that conditions aren't changing under you. If they are changing under you, you can't really plan ahead. Interesting set of issues, and something that differentiates imaging-survey scheduling from spectroscopic follow-up scheduling.

I also did some work comparing expected information gain to expected discovery rate. One issue with information gain is that, because we have to look ahead beyond the current exposure, the gain is hard to write down: It depends strongly on future decisions (for example, whether we decide to stop observing the source entirely!). So I am leaning towards making my first contribution on this subject be about discovery rate.

Expected future-discounted discovery rate, that is.
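
To be concrete about what I mean by that phrase (with completely made-up numbers): score a candidate schedule by the sum over targets of gamma^t times the discovery probability, where t is the time at which each target's exposure completes. The discounting can re-order priorities relative to a pure probability sort:

```python
import numpy as np
from itertools import permutations

# Toy "expected future-discounted discovery rate" objective (made-up numbers):
# each target has a discovery probability and an exposure time; a schedule is
# scored by sum_i gamma^{t_i} p_i, where t_i is the completion time of target i.
def discounted_discoveries(order, p, t_exp, gamma=0.99):
    t = np.cumsum(t_exp[order])               # completion time of each exposure
    return float(np.sum(gamma ** t * p[order]))

p = np.array([0.8, 0.5, 0.1, 0.3])            # discovery probabilities
t_exp = np.array([30.0, 5.0, 10.0, 15.0])     # exposure times (minutes)

naive = np.argsort(-p)                        # highest probability first
best = max(permutations(range(4)),
           key=lambda o: discounted_discoveries(np.array(o), p, t_exp))
print("probability-sorted:", discounted_discoveries(naive, p, t_exp))
print("best ordering     :", best, discounted_discoveries(np.array(best), p, t_exp))
```

The point of the discount is that a discovery now is worth more than the same discovery later; the value of gamma encodes how much more.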

2019-01-06

target selection

On the weekend, Rix (MPIA) and I got in a call to discuss the target selection for SDSS-V, which is a future survey to measure multi-epoch spectroscopy for (potentially) millions of stars. The issue is that we have many stellar targeting categories, and the view Rix and I share is that targeting should be based only on the measured properties of stars in a small set of public, versioned photometric and astrometric catalogs.

This might not sound like a hard constraint, but it is: It means you can't use all the things we know about the stars to select them. That seems crazy to many of our colleagues: Aren't you wasting telescope time if you observe things that you could have known, from existing observations, were not in the desired category? That is, if you require that selection be done from a certain set of public information sources, you are ensuring an efficiency hit.

But that is compensated (way more than compensated) by the point that the target selection will be understandable, repeatable, and simulate-able. That is, the more automatic the target selection is, from simple inputs, the easier it is to do population analyses and statistical analyses, and to simulate the survey (or what the survey would have done in a different galaxy). See, for example, cosmology: The incredibly precise measurements in cosmology have been made possible by adopting simple, inefficient, but easy-to-understand-and-model selection functions. And, indeed: When the selection functions get crazy (as they do in SDSS-III quasar target selection, with which I was involved), the data become very hard to use (the clustering of those quasars on large scales can never be known extremely precisely).

Side note: This problem has been disastrous for radial-velocity surveys for planets, because in most cases, the observation planning has been done by people in a room, talking. That's extremely hard to model in a data analysis.

Rix and I also discussed a couple of subtleties. One is that not only should the selection be based on public surveys, it really should be based only on the measurements from those surveys, and not the uncertainties or error estimates. This is in part because the uncertainties are rarely known correctly, and in part because the uncertainties are a property of the survey, not the Universe! But this is a subtlety. Another subtlety is that we might not just want target lists, we might want priorities. Can we easily model a survey built on target priorities rather than target selection? I think so, but I haven't faced that yet in my statistical work.

2019-01-04

refereeing, selecting, and planning

I don't think I have done a good job of writing the rules for this blog, because I don't get to count refereeing. Really, refereeing papers is a big job and it really is research, since it sometimes involves a lot of literature work or calculation. I worked on refereeing projects for a large part of the day. Not research? Hmmm.

Also not counting as research: I worked on the Gaia Sprint participant selection. This is a hard problem because everyone who applied would be a good participant! As part of this, I worked on demographic statistics of the applicant pool and the possibly selected participants. I hope to be sending out emails next week (apologies to those who are waiting for us to respond!).

Late in the day I had a nice conversation with Stephen Feeney (Flatiron) about his upcoming seminar at Toronto. How do different aspects of data analysis relate? And how do the different scientific targets of that data analysis relate? And how to tell the audience what they want to know about the science, the methods, and the speaker. I am a big believer that a talk you give should communicate things about yourself and not just the Universe. Papers are about the Universe, talks are about you. That's why we invited you!

2019-01-03

the limits of wobble

The day was pretty much lost to non-research in the form of project-management tasks and refereeing and hiring and related. But I did get in a good conversation with Bedell (Flatiron), Luger (Flatiron), and Foreman-Mackey (Flatiron) about the hyper-parameter optimization in our new wobble code. It requires some hand-holding, and if Bedell is going to “run on everything” as she intends to this month, it needs to be very robust and hands-free. We discussed for a bit and decided that she should just set the hyper-parameters to values we know are pretty reasonable right now and just run on everything, and we should only reconsider this question after we have a bunch of cases in hand to look at and understand. All this relates to the point that although we know that wobble works incredibly well on the data we have run it on, we don't currently know its limits in terms of signal-to-noise, number of epochs, phase coverage in the barycentric year, and stellar temperature.

2019-01-02

finished a paper!

It was a great day at Flatiron today! Megan Bedell (Flatiron) finished her paper on wobble. This paper is both about a method for building a data-driven model for high-resolution spectral observations of stars (for the purposes of making extremely precise radial-velocity measurements), and about an open-source code that implements the model. One of the things we did today before submission is discuss the distinction between a software paper and a methods paper, and then we audited the text to make sure that we are making good software/method distinctions.

Another thing that came up in our finishing-up work was the idea of an approximation: As I like to say, once you have specified your assumptions or approximations with sufficient precision, there is only one method to implement. That is, there isn't an optimal method! There is only the method, conditioned on assumptions. But now the question is: What is the epistemological status of the assumptions? I think the assumptions are just choices we make in order to specify the method! That is, when we treat the noise as Gaussian, it is not a claim that the noise is truly Gaussian! It is a claim that we can treat it as Gaussian and still get good and useful results. Once again, my pragmatism. We audited a bit for this kind of language too.

We submitted the paper to the AAS Journals and to arXiv. Look for it on Thursday night (US time) or Friday morning!