2016-09-30

stars at low resolution, pair coding, accretion

Today Kathryn Johnston (Columbia) convened the first of (we hope) many Local-Group group meetings up at Columbia, with people present from NYU, CCA, Columbia, and Princeton. Johnston reported on things she learned at the recent meeting in Paris. Of particular interest to her—and everyone, apparently—was work by Yuan-Sen Ting (Harvard), building on experiments by Anna Y. Q. Ho (Caltech), showing that it is possible to get detailed stellar abundances, even without huge covariances, out of low-resolution spectra. The point is that as resolution decreases, the information per line gets worse, but you also (usually) get more spectral coverage, and this (mostly) compensates. This could have a huge impact on the future of stellar astrophysics.

I spent time today pair-coding (with Adrian Price-Whelan, Princeton) the analytic marginalized likelihood that Semyeong Oh (Princeton) and Price-Whelan and I have been working on. We found a couple bugs and by the end of the screen-sharing video call (yes, that's the way we do it), we had a marginalized likelihood ratio that seems to be delivering very good answers, and fast! Very excited.

The research day ended with a great astrophysics seminar at NYU by Zoltan Haiman (Columbia, NYU) about fast growth of black holes in the early Universe. He has found a spherically symmetric, steady-state, achievable accretion process that is (much) faster than Eddington, using the same assumptions (essentially). I need to think about it and understand it better. The Eddington limit is one of the most secure, robust, and well-tested arguments in all of astrophysics!

2016-09-29

radio stars, comoving stars, orbiting stars

In the morning I met with Kelle Cruz (CUNY) and Ellie Schwab (CUNY), to discuss the statistics component of their project to measure and model radio emission from brown dwarf stars. We worked through a mixture model, in which some are emitting in the radio and some aren't, and how we could do inference in that model.
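The mixture likelihood we discussed can be sketched in a few lines. This is a toy version, with a hypothetical Gaussian emitting population and variable names of my choosing, not the project's actual model:

```python
import numpy as np

def ln_mixture_likelihood(fluxes, sigmas, f_emit, mu_emit, sigma_emit):
    """Log-likelihood for a two-component mixture: a fraction f_emit of
    the brown dwarfs emit in the radio, with intrinsic flux distribution
    N(mu_emit, sigma_emit^2); the rest have zero intrinsic flux. Each
    flux measurement has Gaussian noise with standard deviation sigmas."""
    # emitting component: intrinsic scatter plus measurement noise
    var_e = sigmas**2 + sigma_emit**2
    ln_e = -0.5 * ((fluxes - mu_emit)**2 / var_e + np.log(2 * np.pi * var_e))
    # non-emitting component: measurement noise only, centered at zero
    var_n = sigmas**2
    ln_n = -0.5 * (fluxes**2 / var_n + np.log(2 * np.pi * var_n))
    # mix per object in probability space, then sum the logs over objects
    return np.sum(np.logaddexp(np.log(f_emit) + ln_e,
                               np.log(1.0 - f_emit) + ln_n))
```

Maximizing (or sampling) this over f_emit and the emitting-population parameters is the inference; np.logaddexp keeps the per-object mixture numerically stable.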

Having yesterday written math for the comoving stars paper with Adrian Price-Whelan (Princeton) and Semyeong Oh (Princeton), today I wrote a draft title and abstract. My view is that projects should more-or-less start with a title and abstract, in part because these are the most important parts of the paper, and in part because it helps guide work towards the true critical path.

The research day ended with a Physics Colloquium by Andrea Ghez (UCLA). She talked about the stellar orbits in the Galactic Center and their demonstration of the existence of a black hole there. She showed that (in principle) the black hole was discovered in the 1980s, but the discoverers were very circumspect and conservative. There are lots of remaining puzzles and projects, with existing data and new data. As I said yesterday, this is a very fruitful context for thinking about new engineering challenges.

2016-09-28

we do it all: engineering, stars, galaxies

Today was a good research day at the CCA. The day started with Adrian Price-Whelan (Princeton) and I arguing about the cleanest notation in which to cast our complete-the-squares math that is relevant to our wide-binary marginalization efforts. Once we decided, I went off to write LaTeX, and Price-Whelan and Semyeong Oh (Princeton) went off to pair-code it.

In our stars group meeting, Andrea Ghez (UCLA) dropped in; we talked about engineering improvements that could make the observations they do of the (time-variable, crowded) Galactic Center much more productive and precise. These ranged from adaptive coronagraphy (something I would love to think about) to data-analysis methods that can infer the properties of sources too faint or too crowded to individually measure at high precision. Oddly, there is a comedy of the commons in which technical advances we want for exoplanet research would almost all be useful also in the Galactic Center.

Also in the stars meeting, we put Semyeong Oh on the spot, getting her to visualize what we know about widely separated pairs of comoving stars in the Gaia DR1 TGAS sample. She was able to show us that at least some of our widely separated pairs are members of young, open clusters. She was also able to show us that the photometric properties of the stars are consistent with the stars being young and the pairs being short-lived. It was an extremely impressive session, because everyone in the room was shouting out changes they wanted to see in the notebook, and she just calmly executed.

In the Blanton–Hogg cosmology group meeting, we talked about AS4 proposals—the proposals for what to do after the end of SDSS-IV. Most of these are about stars, but there are some about spatially resolving more galaxies. We discussed a bit what we expect from this process. After that, MJ Vakili (NYU) took us through the definition of assembly bias, and his work to show that this effect is likely present—but at a low level—in the SDSS galaxy samples. That led to a more general discussion of the occupation of galaxies in the dark-matter field, which is something I fantasize about working on, from a data-driven perspective. I happened to run into Roman Scoccimarro (NYU) on my way home, and he disabused me of some of my dumbest ideas there.

2016-09-27

linear algebra; and spectroscopic parallax

I spent some stolen research time today working out a simple notation for completing the square in the marginalization that Adrian Price-Whelan (Princeton), Semyeong Oh (Princeton), and I are working on for the Gaia DR1 TGAS data. It isn't hard, but you sure have to keep your head screwed on when non-square matrices are flying around, and some matrices have zero or infinite eigenvalues.

Anna Y. Q. Ho (Caltech) and I discussed things she might do at the #GaiaSprint next month. One option would be to figure out how you can infer parallax from spectrum, or spectrum from parallax. The big issue for naive approaches is that the distance or absolute magnitude uncertainties are asymmetric (think Lutz-Kelker bias and all that), but parallax uncertainties are symmetric. I suggested that we could work in the inverse-square-root-luminosity space (yes, insane) for modeling purposes and see if that helps? We would also want to use the extension of The Cannon built by Christina Eilers (MPIA) this past summer, to deal with uncertainties in the labels.
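The inverse-square-root-luminosity suggestion can be made concrete (this is my reconstruction of the reasoning, not math from the conversation): with parallax $\varpi = 1/d$ and apparent flux $f = L / (4\pi d^2)$,

```latex
\varpi \;=\; \frac{1}{d} \;=\; \sqrt{\frac{4\pi f}{L}} \;=\; \sqrt{4\pi f}\; L^{-1/2} ,
```

so at fixed (well-measured) apparent flux, the parallax is linear in $L^{-1/2}$. Symmetric Gaussian parallax uncertainties therefore stay symmetric and Gaussian in that space, which is the whole point of the (admittedly insane-looking) reparameterization.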

2016-09-26

stars that were born together

I spent research time today working through the first draft of a paper by Melissa Ness (MPIA) about the chemical homogeneity of clusters of stars. She is using stars that are very close together in abundance space to look at what we can say about stars that are (and aren't) born together. I also spoke with Adrian Price-Whelan (Princeton) about marginalizing the likelihoods for our binary-star and not-binary-star hypotheses in Gaia DR1 TGAS. After some terrifying experiments over the weekend with numerical marginalizations, we decided that we have to bite the bullet and do the analytic marginalization, which requires completing the square. We both agreed to write down math.

At lunch time I gave the Brown-Bag talk at the CCPP. I spoke about Ness's work, and also about Price-Whelan's work. I see these things as related, because Ness finds stars that are clearly co-eval in chemical space, while Price-Whelan finds stars that are clearly co-eval in phase space.

2016-09-23

fitting spectroscopic systematics and marginalizing out #GaiaDR1

In the morning, I discussed new NYU graduate student Jason Cao's project to generalize The Cannon to fit for radial-velocity offsets and line-spread function variations at test time. This involves generalizing the model, but in a way that doesn't make anything much more computationally complex.

In the afternoon, I had a realization that we probably can compute fully marginalized likelihoods for the wide-separation binary problem in Gaia DR1 TGAS. The idea is that if we treat the velocity distribution as Gaussian, and the proper-motion errors as Gaussian, then at fixed true distance there is an analytic velocity integral. That reduces the marginalization to only two non-analytic dimensions (the true distances to the two stars). I started to work out the math and then foundered on the rocks of completing the square in the case of non-square matrix algebra. No problem really; we have succeeded before (in our K2 work).
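The analytic velocity integral is an instance of the standard Gaussian marginalization identity (a sketch in my own notation, not the paper's): if the 3-space velocity $v$ has a Gaussian prior $\mathcal{N}(\mu, \Lambda)$ and, at fixed true distance, the observed proper motions $x$ are a linear projection $M v$ with Gaussian noise covariance $C$, then

```latex
\int \mathcal{N}(x \mid M v,\; C)\;\mathcal{N}(v \mid \mu,\; \Lambda)\;\mathrm{d}v
\;=\; \mathcal{N}\!\left(x \mid M \mu,\; C + M \Lambda M^{\mathsf{T}}\right) .
```

The matrix $M$ is non-square (for instance, 2-by-3 for a single star's two proper-motion components), which is exactly where the completing-the-square care with non-square matrices comes in.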

2016-09-22

binaries, velocities, Gaia

Early in the day, I discussed with Hans-Walter Rix (MPIA) the wide-separation binaries that Adrian Price-Whelan (Princeton) and I are finding in the Gaia DR1 data. He expressed some skepticism: Are we sure that such pairs can't be produced spuriously by the pipelines or systematic errors? That's important to check; no need to hurry out a wrong paper!

Late in the day, I had two tiny, eensy breakthroughs: In the first, I figured out that Price-Whelan and I can cast our binary discovery project in terms of a ratio of tractable marginalized likelihoods. That would be fun, and it would constitute a (relatively) responsible use of the (noisy) parallax information. In the second, I was able to confirm (by experimental coding) the (annoyingly correct) intuition of Dan Foreman-Mackey (UW) that the linearized spectral shift is not precise enough for our extreme-precision radial-velocity needs. So I have to do full-up redshifting of everything.
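Foreman-Mackey's intuition is easy to check numerically. Here is a toy version (a single hypothetical Gaussian line, not our actual spectra) comparing a full Doppler shift against its first-order Taylor expansion:

```python
import numpy as np

c = 299792.458  # speed of light, km/s
lam = np.linspace(4999.0, 5001.0, 2001)  # wavelength grid, Angstroms

def line(l):
    # toy spectrum: one Gaussian absorption line at 5000 A, 0.1 A wide
    return 1.0 - 0.5 * np.exp(-0.5 * ((l - 5000.0) / 0.1)**2)

def shift_full(v):
    # full Doppler shift: evaluate the spectrum at de-shifted wavelengths
    return line(lam / (1.0 + v / c))

def shift_linear(v):
    # first-order (linearized) shift: f(lam) - (v/c) * lam * f'(lam)
    dfdl = np.gradient(line(lam), lam)
    return line(lam) - (v / c) * lam * dfdl

# maximum residual of the linearization, for a range of velocities (km/s)
res = {v: np.max(np.abs(shift_full(v) - shift_linear(v)))
       for v in (0.01, 0.1, 1.0)}
```

The residual of the linearization grows roughly as (v/c) squared, so whether first order is good enough depends entirely on the precision target and the velocity amplitude.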

2016-09-21

group meetings

At my morning group meeting, Will Farr (Birmingham) told us about CARMA models and their use in stellar radial velocity analysis. His view is that they are a possible basis or method for looking (coarsely) at asteroseismology. That meshes well with things we have been talking about at NYU about doing Gaussian Processes with kernels that are non-trivial in the frequency domain to identify asteroseismic modes.
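One way to picture the frequency-domain-kernel idea (my illustration, not Farr's actual models): a stochastically driven, damped simple harmonic oscillator has a power spectrum sharply peaked at its resonant frequency, so a Gaussian Process with that kernel "listens" for a mode at that frequency.

```python
import numpy as np

def sho_psd(omega, S0, omega0, Q):
    """Power spectrum of a stochastically driven, damped simple harmonic
    oscillator: for large quality factor Q it peaks sharply near omega0,
    which is what makes such kernels sensitive to oscillation modes."""
    return (np.sqrt(2.0 / np.pi) * S0 * omega0**4 /
            ((omega**2 - omega0**2)**2 + omega**2 * omega0**2 / Q**2))

omega = np.linspace(0.01, 10.0, 5000)
psd = sho_psd(omega, S0=1.0, omega0=3.0, Q=20.0)
peak_omega = omega[np.argmax(psd)]  # should land very near omega0
```

CARMA power spectra are rational functions of frequency, so a sum of terms like this can represent several asteroseismic modes plus a smooth granulation background.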

In the afternoon group meeting, we had a very wide-ranging conversation about possible future work on CMB foregrounds, and about using shrinkage priors to improve noisy measurements of SZ clusters and other low signal-to-noise objects. We also discussed the recent Dragonfly discovery of a very low surface-brightness galaxy, and whether it presents a challenge for cosmological models.

2016-09-20

data-driven models of images and stars

Today was a low-research day! That said, I had two phone conversations of great value. The first was with Andy Casey (Cambridge), about possibly building a fully data-driven model of stars that goes way beyond The Cannon, using the Gaia data as labels, and de-noising the Gaia data themselves. I am trying to conceptualize a project for the upcoming #GaiaSprint.

I also had a great phone conference with Dun Wang (NYU), Dan Foreman-Mackey (UW), and Bernhard Schölkopf (MPI-IS) about image differencing, or Wang's new version of it, that has been so successful in Kepler data. We talked about the regimes in which it would fail, and vowed to test these in writing the paper. In traditional image differencing, you use the past images to make a reference image, and you use the present image to determine pointing, rotation, and PSF adjustments. In Wang's version, you use the past images to determine regression coefficients, and you use the present image to predict itself, using those regression coefficients. That's odd, but not all that different if you view it from far enough away. We have writing to do!
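The distinction can be sketched in a toy linear model (entirely made-up data; this is not Wang's code): the past images train regression coefficients that predict one pixel from the other pixels of the same image.

```python
import numpy as np

rng = np.random.default_rng(42)

# toy data: each "image" is a vector of pixels; pixels share common
# systematics (e.g. pointing jitter) encoded here as hidden factors
n_past, n_pix = 200, 50
factors = rng.normal(size=(n_past, 3))
loadings = rng.normal(size=(3, n_pix))
past = factors @ loadings + 0.01 * rng.normal(size=(n_past, n_pix))

target_pix = 0
predictors = np.delete(np.arange(n_pix), target_pix)

# train: regress the target pixel's PAST values on the OTHER pixels'
# past values (small ridge term for numerical stability)
X, y = past[:, predictors], past[:, target_pix]
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# predict: apply the learned coefficients to a NEW image's other pixels
new_factors = rng.normal(size=(1, 3))
new_image = (new_factors @ loadings + 0.01 * rng.normal(size=(1, n_pix)))[0]
prediction = new_image[predictors] @ w
residual = new_image[target_pix] - prediction  # should be near noise level
```

Because the prediction uses only other pixels of the present image, it captures shared systematics (pointing, PSF changes) without any astrophysical model of the target pixel; that is the sense in which it is general image modeling rather than classical differencing.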

2016-09-19

measuring and modeling radial velocities

Dan Foreman-Mackey (UW) appeared for a few days in New York City. I had various conversations with him, including one in which I sanity-checked my data-driven model for radial velocities. He was suspicious that I can take the first-order (linear) approximation on the velocities. I said that they are a thousandth of a pixel! He still was suspicious. I also discussed with him the point of—and the mathematical basis underlying—the project we have with Adrian Price-Whelan (Princeton) on inferring companion orbits from stellar radial-velocity data. He agrees with me that we have a point in doing this project despite its unbelievably limited scope! Remotely, I worked a bit more on the wide-separation binaries in Gaia DR1 with Price-Whelan.

2016-09-18

data-driven radial velocities

In my weekend research time, I worked out a fully data-driven method for measuring radial velocities in an extreme-precision (or even normal-precision) spectroscopic survey. The idea is to simultaneously fit for the spectrum of the star and its radial-velocity offset; you need multiple epochs of observations to get both (at least at high signal-to-noise). Because the model is fully data-driven, it won't give absolute radial velocities; it will only give relative velocities. That's always the cost of being data driven—the loss of interpretability.
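Here is a stripped-down version of the velocity-measurement step, with a known toy template standing in for the simultaneously fitted, data-driven spectrum (the full method would alternate between refitting the template and refitting the velocities):

```python
import numpy as np

rng = np.random.default_rng(0)
c = 299792.458  # km/s
lam = np.linspace(4999.0, 5001.0, 400)

def model(l, v):
    # toy stellar spectrum: one Gaussian absorption line, Doppler-shifted
    return 1.0 - 0.6 * np.exp(-0.5 * ((l / (1.0 + v / c) - 5000.0) / 0.15)**2)

# several epochs, each with its own small velocity offset (km/s)
true_v = np.array([0.0, 0.4, -0.3, 0.2])
data = np.array([model(lam, v) for v in true_v])
data += 0.001 * rng.normal(size=data.shape)

def fit_epoch(spec, v_grid):
    # per-epoch velocity: brute-force chi^2 over a velocity grid
    chi2 = [np.sum((spec - model(lam, v))**2) for v in v_grid]
    return v_grid[int(np.argmin(chi2))]

v_grid = np.linspace(-1.0, 1.0, 401)
v_hat = np.array([fit_epoch(s, v_grid) for s in data])

# data-driven caveat: only RELATIVE velocities are meaningful, so
# compare epoch-to-epoch differences, not absolute values
rel = v_hat - v_hat[0]
```

Only the differences in rel carry information here, which is the relative-velocity caveat in action.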

I also added to the model some flexibility to capture spectral variations with time, especially those that might project onto the radial-velocity direction or measurement. That would permit us to discover and characterize spectral changes that co-vary with surface radial-velocity perturbations or jitter. I am trying to write something down that would be practical to apply to HARPS data, but I'm all theory right now.

2016-09-16

Gaia thinking

I continued to think about and write about Gaia DR1 projects today. In particular, I tried to write down a responsible way to measure the standardness of standard stars, given noisy parallaxes. I also tried to understand whether we have a scope and interesting-enough results on wide-separation binary stars to merit a paper.

2016-09-15

cross-correlation, Disco, Gaia, and life

My day started with a discussion with Megan Bedell (Chicago) of the determination of stellar radial velocities by cross-correlation with a weighted mask (as is done in the HARPS pipeline). We talked about the subtleties of doing this when there are partial-pixel shifts and we want answers that are continuous.
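A minimal sketch of the mask cross-correlation with sub-pixel refinement (a delta-function toy mask with a made-up line list, much simpler than the HARPS weighted windows):

```python
import numpy as np

# hypothetical line list and weights (NOT the real HARPS mask)
line_centers = np.array([5000.0, 5002.5, 5005.0])
weights = np.array([1.0, 0.5, 0.8])

# toy spectrum: the mask lines, all shifted by a true offset of 0.02 A
lam = np.linspace(4995.0, 5010.0, 3000)
spec = np.ones_like(lam)
for l0, w in zip(line_centers, weights):
    spec -= 0.4 * w * np.exp(-0.5 * ((lam - l0 - 0.02) / 0.05)**2)

def ccf(shift):
    # weighted-mask cross-correlation; linear interpolation of the
    # spectrum handles partial-pixel shifts continuously
    return sum(w * np.interp(l0 + shift, lam, spec)
               for l0, w in zip(line_centers, weights))

shifts = np.linspace(-0.1, 0.1, 201)
vals = np.array([ccf(s) for s in shifts])
i = int(np.argmin(vals))  # absorption lines: the CCF minimum
# parabolic refinement gives a continuous, sub-grid shift estimate
y0, y1, y2 = vals[i - 1], vals[i], vals[i + 1]
best = shifts[i] + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2) * (shifts[1] - shifts[0])
```

The parabola fit through the three points around the extremum is what makes the answer continuous rather than quantized at the shift-grid resolution.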

There was a substantial phone call today organized by Jonathan Bird (Vanderbilt) to talk about the After-SDSS-IV proposal for use of the 2.5-m telescope and its instruments. We are working on a proposal called Disco to do dense sampling of the Milky Way disk (looking in the infrared through the dust).

By text message, Adrian Price-Whelan (Princeton) and I tentatively decided that we would pursue a paper about wide binaries with the Gaia DR1 TGAS data that we were exploring yesterday. I really hope we have a paper to write, because it would be fun to be in the first set of papers. Of course Gaia DR1 papers appeared on the arXiv already tonight!

At the end of the day, Sean Solomon (Columbia) gave the Departmental Colloquium, about Mercury and the MESSENGER mission. It was a great talk, showing that there are water and other volatiles on Mercury, which is not expected in naive models. At the end, Jasna Brujic (NYU) asked him about life on Mercury and elsewhere in the Solar System. He described the evidence that rocks are thrown from planet to planet and expressed the view (also held by me) that it is quite likely that there is life elsewhere in the Solar System. That made me happy!

2016-09-14

#GaiaDR1 zero-day

Today (at 06:30 New York time) Gaia released its DR1 data, and in particular the TGAS sample of stars with five-parameter solutions and photometry. What a great day it was! I assembled with Kathryn Johnston (Columbia), David Spergel (Princeton), Adrian Price-Whelan (Princeton), Ruth Angus (Columbia), Keith Hawkins (Columbia), and others to get, play with, and make figures from the new data. Many amusing things happened, and this blog post will not capture them all.

Hawkins immediately plotted the velocity distribution of disk stars in the U-V plane, using the overlap between TGAS and RAVE. He confirms the velocity structure Bovy, Roweis, and I predicted based on (clever, if I say so myself) de-projections of the Hipparcos data. Right as we were looking at this, Bovy tweeted the same thing. Hawkins has access to our RAVE-on data with detailed abundances, so he can show that the velocity structures are chemically inhomogeneous; the questions that are easy to ask are: Are they all inhomogeneous in the same ways, or are there differences? And can we see any spatial dependence within TGAS of the velocity structure? He moved on to looking at the candle-standardness of the red clump.
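For context (my own note, not Hawkins's code): the velocity information in TGAS comes from proper motions plus parallaxes, via the standard conversion.

```python
import numpy as np

K = 4.740470  # km/s per (mas/yr)/mas: one AU per year, expressed in km/s

def tangential_velocity(pm_mas_yr, parallax_mas):
    """Tangential velocity in km/s from proper motion (mas/yr) and
    parallax (mas): v_t = 4.74 * mu / parallax. Resolving the proper
    motion into Galactic components (and adding a radial velocity,
    e.g. from RAVE) is what yields the U and V velocities."""
    return K * np.asarray(pm_mas_yr) / np.asarray(parallax_mas)
```

For example, a star with a 10 mas/yr proper motion and a 10 mas parallax (100 pc) moves at about 4.74 km/s in the plane of the sky.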

Andy Casey (Cambridge), working remotely, made temperature-magnitude diagrams for the RAVE-on sample. I asked him to show what happens as you harden the cut on the parallax signal-to-noise (parallax over parallax uncertainty). He tweeted the answer. It really looks like we might be able to use Gaia to build a completely data-driven model of all aspects of stars.

Price-Whelan and I looked at various things. We started by trying to see if there is vertical velocity structure in the nearby disk that might show evidence for disk warping, or horizontal velocity structure that might look like spiral arm perturbations. The figures are confusing! There seems to be a very cold bubble around the Sun in the Galactocentric U velocity, which is odd. After spending lots of time confused about that, we looked for very wide separation binary stars, and we see lots! Indeed, it looks like we have evidence for binaries with separations larger than 1 pc! That's worth following up, especially if we have any overlapping spectra. Finally, Price-Whelan also showed that the Kepler-identified transiting exoplanet host stars are all on disk orbits; that is, we don't have (yet) any halo exoplanets. But these are early days!

That's just a tiny slice of the things we started to think about and play with. It is the beginning of a new era. Thank you to the Gaia Mission and all the people who gave years of their professional and scientific lives to this project.

2016-09-13

a tiny bit of Gaia

Today was taken out by teaching and the job season, but in my tiny bit of research time, I worked on what I am going to do tomorrow with the Gaia DR1 data. That got me on the phone to Adrian Price-Whelan. We talked about Gaia and also our binary-star sampler.

2016-09-12

nearly ready for Gaia DR1

I got obsessed on the weekend, and even more today, with the problem of getting the Gaia DR1 data in the form of a complete flat file, on Wednesday morning (that is, in two days) when the data release happens. Various sources have told us that we can't get a flat file; instead, we have to do thousands to hundreds of thousands of remote database queries! In my fury about this, I tweeted (yes, twittered) at Jos de Bruijne (ESTEC), who promised me that the flat file would be available at the ESA Archive right away. Whew!

I decided that everyone on my team who is thinking about Gaia needs to figure out, today or tomorrow, what plot they are going to make with the new data. That is, the question What science question am I going to ask? is too big, so we should ask the simpler question about the plot / figure. I will send email to everyone about this tomorrow. I also noted the release in an email out to the #GaiaSprint participants.

In unrelated news, Andy Casey (Cambridge) and I discussed what we need to do to get ready for SDSS-IV APOGEE2 DR14. As my loyal reader knows, we are providing value-added content to DR14 using The Cannon. We worked out what our minimum deliverable is, what inputs and outputs that entails, what script writing that entails, and left it with Casey to communicate all that. We might deliver more than the minimum, but we want to only promise the minimum.

2016-09-11

writing about radial velocity inference

I spent my weekend research time writing in the paper by Adrian Price-Whelan (Princeton) and me on exact sampling for the exoplanet or binary-star radial velocity problem. In addition to the writing, we spent a lot of time talking through the experiments we want to show in this paper to demonstrate the power of the method. But my main contribution today was to blow out a discussion section as fast as possible. In our paper template, the discussion section talks about the implications of the work, but more importantly what has to change if the assumptions turn out to be wrong (and they do!). I wrote fast, obeying the motto “write drunk, edit sober.”

2016-09-09

finding spectral twins in pixel space

As my loyal reader knows, I am a big believer in trying to use the immense data sets we have on stars (in particular spectral data sets) to build data-driven models that have some kind of interpretability. The problem is interpretability, because purely data-driven models are uninterpretable by construction. My biggest success along these lines is The Cannon, first built by Melissa Ness (MPIA) and followed up by Anna Ho (Caltech) and Andy Casey (Cambridge). An amusing and mostly true summary of machine learning is that all supervised methods are, fundamentally, nearest-neighbor methods. This suggests that we might be able to make massive progress if we just started to look at stars with identical or near-identical spectra. And of course I mean in the space of spectral pixels, not in the space of labels derived from those pixels. I pitched this project to Marc Williamson (NYU) today, and sent him off with some reading. We are going to look for twins, but accounting for variations in radial velocity and line-spread function, so it won't be completely trivial.
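The twin search reduces to a weighted nearest-neighbor query in pixel space. A toy sketch (fabricated spectra; the real version must also handle radial-velocity and line-spread-function variations, as noted):

```python
import numpy as np

rng = np.random.default_rng(1)

# toy spectra: n_stars x n_pix flux array with per-pixel uncertainties;
# stars come in a few underlying "types" plus noise
n_stars, n_pix = 500, 100
base = rng.normal(size=(5, n_pix))
labels = rng.integers(0, 5, size=n_stars)
spectra = base[labels] + 0.05 * rng.normal(size=(n_stars, n_pix))
ivar = np.full((n_stars, n_pix), 1.0 / 0.05**2)  # inverse variances

def chi2_distances(i):
    """Inverse-variance-weighted chi^2 distance from star i to every
    star, computed in PIXEL space (no derived labels involved)."""
    # combine the two stars' per-pixel uncertainties
    w = ivar[i] * ivar / (ivar[i] + ivar)
    return np.sum(w * (spectra - spectra[i])**2, axis=1)

d = chi2_distances(0)
d[0] = np.inf              # exclude the trivial self-match
twin = int(np.argmin(d))   # nearest neighbor in pixel space
```

Because d is a chi-squared, "near-identical spectra" has a calibrated meaning: chi-squared per pixel of order unity.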

Megan Bedell (Chicago) pointed out to me today that the binary masks used by the HARPS pipeline appeared on arXiv today. That's big for our radial-velocity projects.

2016-09-08

Simple Monte Carlo; a noise model

After the successes of yesterday on our custom radial-velocity sampler, currently called The Joker (but not pronounced how you might think), I put some time into writing the method section. One complex point of the sampling, which is fundamentally not Markov but instead just Simple Monte Carlo, is that if the SMC doesn't lead to many surviving samples, we either do more SMC or else switch over to a standard MCMC, initialized by the output of the SMC. That took some design thought; it capitalizes on an important point of problem structure, which is that—given a finite time window of observations—there is a finite resolution to likelihood peaks in the period direction. It remains to be seen if what we have designed will work.
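The SMC scheme can be sketched on a toy one-signal problem (fabricated data; the real sampler marginalizes the linear parameters analytically, where here I merely profile them out by least squares as a stand-in):

```python
import numpy as np

rng = np.random.default_rng(3)

# toy radial-velocity data: one circular orbit, sparse irregular times
t = np.sort(rng.uniform(0.0, 100.0, 16))
P_true, sigma = 17.0, 0.3
rv = 2.0 * np.sin(2 * np.pi * t / P_true) + sigma * rng.normal(size=t.size)

def ln_like(P):
    # amplitude and phase enter LINEARLY via a sin/cos basis; solve for
    # them by least squares (stand-in for the analytic marginalization)
    A = np.column_stack([np.sin(2 * np.pi * t / P),
                         np.cos(2 * np.pi * t / P)])
    coeffs, *_ = np.linalg.lstsq(A, rv, rcond=None)
    return -0.5 * np.sum((rv - A @ coeffs)**2) / sigma**2

# Simple Monte Carlo: dense prior samples in the nonlinear parameter
# (period), then rejection against the likelihood
P_samples = rng.uniform(5.0, 50.0, 10000)
ln_l = np.array([ln_like(P) for P in P_samples])
survivors = P_samples[rng.uniform(size=P_samples.size) < np.exp(ln_l - ln_l.max())]
```

The finite-time-window point is what makes this feasible: for a window of duration T, likelihood peaks in period have width of order P squared over (2 T), so a finite prior-sample density is guaranteed to resolve every peak. If too few samples survive, draw more, or hand the survivors to a standard MCMC as initialization.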

Early in the day, I spoke with Andy Casey (Cambridge) about a possible noise model for the label outputs from The Cannon acting on the RAVE data. As my loyal reader knows, we consider the formal uncertainties coming from The Cannon to be under-estimates. It sounds like Casey has good evidence for a noise floor, which can be added in quadrature and make repeat visits to spectra more-or-less consistent. It's do-or-die because he needs to submit this paper today or tomorrow!
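The noise-floor model is just addition in quadrature (a sketch; the floor value itself would be whatever Casey's repeat-visit statistics imply):

```python
import numpy as np

def inflate(sigma_formal, sigma_floor):
    # total uncertainty: formal error plus a constant floor, in quadrature
    return np.sqrt(np.asarray(sigma_formal)**2 + sigma_floor**2)
```

With the right floor, the scatter of repeat visits divided by the inflated uncertainties should have unit variance; that is the consistency check.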

2016-09-07

first group meeting at the SCCA

The single best thing today was the first session of the new joint group meeting of observational astrophysicists at the brand-spanking-new Simons Center for Computational Astrophysics. In addition to the union of the groups of Mike Blanton (NYU), Anthony Pullen (NYU), and me, the Director of the Center, David Spergel (SCCA), was there. We had great attendance; introductions alone took more than 90 minutes. Highlights for me included the following: A brief argument broke out about binary stars: Do we really know that both members of a binary pair have the same chemical abundances in detail? Spergel pointed out that the high-velocity stars that Keith Hawkins (Columbia) is finding could have implications for the early universe and the escape fraction of ultraviolet photons. Pullen talked about finding the SZ effect in filaments, Spergel mentioned work on identifying filaments using good machine learning by Shirley Ho (CMU), and MJ Vakili (NYU) talked about the work he has done on halo occupation, which could (in principle) take filament-environment as inputs. Hawkins also talked about a Gaia DR1 zero-day project; Adrian Price-Whelan (Princeton) and I promised to start off next week's group meeting with a visualization of the DR1 data, which at that point will be hours old!

Before and after group meeting, Price-Whelan and I worked on our binary-star (and exoplanet) sampler, building and executing experiments, and writing in the document. One amusing thing is that Megan Bedell (Chicago) gave us some (proprietary) exoplanet radial-velocity data to fit, where she finds one period but thinks there might be more. We found only her one period, confirming her results; however, we could get other possible periods if we drop her first data point or blow up her error bars (uncertainties).

One thing that came up in group meeting and I discussed afterwards with Hawkins is the project by Andy Casey (Cambridge), Hawkins, me, and many others to release a reanalysis of the RAVE data that overlap the Gaia DR1 TGAS sample. Casey and Hawkins have detailed abundances for the red-giant stars, but not the main-sequence stars. I asked Hawkins why the main sequence is so much harder. He stunned me by saying that in fact he thinks the main sequence ought to be easier; we just have red-giant abundances now because people have worked harder on them. There's some medium-hanging (I won't say low-hanging!) fruit right there!

2016-09-06

the best exoplanet spectrographs are much better than m/s

Today was the first day of the new academic year, so much of my day was obliterated by the most fun part of my job, which is teaching! That said, I still got in time for conversations with Megan Bedell, Dun Wang, and Boris Leistedt. I presented to Bedell my causal argument that instruments like HARPS are actually delivering much better than one meter per second precision and that the substantial scatter seen is because of the stars, not the instruments (and not the software pipelines). The argument is about the lack of covariance between measured radial velocities and (say) wavelength calibration parameters: Even if the noise contributing to wavelength calibration jitter is uncorrelated with the noise contributing to stellar jitter, it should show up as a covariance between calibration parameters and stellar radial velocity measurements. The argument is subtle, causal, and uncertain (because I am bad at this kind of reasoning). But if I am right, we don't need better instruments, and we don't need better pipelines. We need better models of stars!

Dun Wang and I discussed near-term and medium-term publishing plans. The top priority is to finish his paper on image differencing. I asked him to work hard on explaining how it is totally different from all other image differencing methods, because it uses the past images to learn regression coefficients, but builds the model of the present (target) image from other pixels in that image itself. That is, it is more general image modeling, really. And that's why it performs so well! Of course it requires a great data set for training.

Leistedt is using Gaussian Processes inside a physical model for galaxy observations given Doppler shifts. This is a completely flexible data-driven model, but constrained to obey the redshift physics implied by special relativity. That makes for a very powerful method for predicting galaxy colors at other redshifts, given an observation at a single (training) redshift. He can make photometric redshift predictions, make k corrections (my favorite), simulate future data, and train the photometric redshifts in survey A from a training set that exists only in survey B. All awesome! We discussed the scope of paper zero.

2016-09-02

#AstroHackWeek, day five

Last day of #AstroHackWeek! What a week it has been, for me, anyway. So many projects. In the final wrap-up session, I was just blown away by each hack and accomplishment. You can get some feel for it at the hackpad. That site is subject to change and editing, so it isn't a static result, but you should still get a sense of the awesome.

In the wrap-up, Matt Mechtley (ASU) surprised me by showing that our molecular reconstruction does not depend extremely sensitively on the assumption that the molecules are viewed from an isotropic direction distribution! That is, he generated data with a (very strong) dipole pattern in the direction distribution, not aligned with any symmetry of the molecule, and we still reconstruct well. That bodes very well for the method.

Earlier in the day, I discussed with Dalya Baron (TAU) what directions to move in on the molecular imaging project. She decided to move the model towards the proper diffraction imaging model, with the photons generated not by the molecule directly in real space but by the squared norm of the Fourier Transform of the molecule. Because our methodology depends on having correct, analytic derivatives of a marginalized likelihood, this requires taking our derivatives through a sum of products of Fourier Transformed basis functions and their complex conjugates. Oddly, Baron wants to figure that out and code it up! (Oh, and unit tests, of course.) I'm impressed, and excited about the results. Watch this space.
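My reconstruction of the structure of that derivative (hedged; Baron's actual derivation may be organized differently): if the molecule is expanded in basis functions, $\rho(x) = \sum_k a_k\,\varphi_k(x)$, then the diffraction intensity is the squared norm of its Fourier transform, and for real coefficients

```latex
I(q) \;=\; \Big|\sum_k a_k\,\hat\varphi_k(q)\Big|^2
\;=\; \sum_{k,\ell} a_k\,a_\ell\,\hat\varphi_k(q)\,\hat\varphi_\ell^{*}(q),
\qquad
\frac{\partial I}{\partial a_m}
\;=\; 2\,\mathrm{Re}\!\left[\hat\varphi_m(q)\sum_\ell a_\ell\,\hat\varphi_\ell^{*}(q)\right],
```

which is exactly the sum of products of Fourier-transformed basis functions and their complex conjugates that the derivatives must be taken through.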

2016-09-01

#AstroHackWeek, day four

Today was the fourth day of #AstroHackWeek 2016. My goodness is everyone tired! A week of 10-to-15-hour days is a bit too much! Fortunately, the crowd was woken up by two awesome talks on optimization, one by Yu Feng (UCB) and one by Grigor Aslanyan (UCB). Feng talked about optimization in general, classifying methods, and helping us to understand the use of gradients and Hessians, and trust regions. Aslanyan focused on the use of optimization to solve linear-algebra problems, and explained extremely clearly the conjugate gradient method. These talks, taken together, constituted the best live introduction to optimization I have ever seen. These were followed by Dan Foreman-Mackey (UW), who explained, in his usual extremely clear and engaging way, the basics of sampling, assessing convergence, and presentation of sampling-based results. So excited to get our pedagogical paper on MCMC out there.

I pair-coded with Adrian Price-Whelan (Princeton) our new project to sample the single-line binary star (or exoplanet) system exactly using Simple Monte Carlo. Our innovation is to sample explicitly only in the non-linear parameters, and to deal with the linear parameters through exact marginalization at the rejection step, and through exact sampling at the output step. This all seems to work, and we can see the posterior pdf become less multi-modal as the number (or density) of observations increases.

Meanwhile, Dalya Baron (TAU) and Matt Mechtley (ASU) changed the toy microscopy problem to a better test problem (we had too much symmetry in our toy “molecule”). Everything works end-to-end, so in the end-of-day stand-up meeting, Baron stood up and explained our marginalized likelihood and how we are optimizing it, and Mechtley showed the data and the results. These pretty much shocked the audience: Our images are so bad (a few photons each) and yet such a complex molecule can be inferred. We ended the day by discussing what directions to go next with this; I am a bit confused about the scope for paper one.