At group meeting today, Angus talked about her attempts to reproduce asteroseismological measurements from the literature using Kepler short-cadence data. I think there is something missing, because we don't see all the modes as clearly as the published analyses do. Of course our real goal is not just to reproduce the results; we discussed our advantages over existing methods: we have a more realistic generative model of the data; we can fit multiple frequencies simultaneously; we can handle not just non-uniform time sampling but also non-uniform exposure times (which matter at high frequencies; see the sketch below); and we can take in extremely non-trivial noise models (including ones that do detrending). I am sure we have a project and a paper, but we don't understand our best scope yet.
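To make the exposure-time point concrete, here is a minimal sketch, not Angus's actual code: the function name and arguments are hypothetical. Averaging a sinusoid A sin(2 pi f t + phi) over a finite exposure of length t_exp multiplies its amplitude by sinc(f t_exp), which is negligible at low frequencies but strongly suppresses modes near and above the inverse exposure time.

```python
import numpy as np

def exposure_averaged_flux(t_mid, t_exp, freqs, amps, phases):
    """Hypothetical sketch: mean flux of a sum of sinusoidal modes,
    where each datum averages the signal over an exposure of length
    t_exp centered at t_mid.  The averaging attenuates each mode's
    amplitude by sinc(f * t_exp), so exposure time matters at high
    frequencies."""
    t_mid = np.atleast_1d(t_mid)[:, None]    # shape (N, 1)
    t_exp = np.atleast_1d(t_exp)[:, None]    # (N, 1); can vary per datum
    freqs = np.atleast_1d(freqs)[None, :]    # (1, K) mode frequencies
    amps = np.atleast_1d(amps)[None, :]
    phases = np.atleast_1d(phases)[None, :]
    atten = np.sinc(freqs * t_exp)           # np.sinc(x) = sin(pi x)/(pi x)
    signal = atten * amps * np.sin(2 * np.pi * freqs * t_mid + phases)
    return signal.sum(axis=1)

# toy usage: two modes, non-uniform sampling and non-uniform exposures
t = np.sort(np.random.uniform(0.0, 10.0, 50))
dt = np.random.choice([0.02, 0.5], size=50)  # short and long exposures mixed
flux = exposure_averaged_flux(t, dt, freqs=[5.0, 7.3], amps=[1.0, 0.4],
                              phases=[0.0, 1.2])
```

The non-trivial noise models would enter as the likelihood around this mean prediction; that part is deliberately left out of the sketch.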
Just before lunch, Kyunghyun Cho (Montreal) gave a Computer Science colloquium about deep learning and machine translation. His system is structured as an encoder, a "meaning representation", and then a decoder. All three components are interesting, but the representation in the middle is a model system for holding semantic or linguistic structure. Very interesting! His systems perform well. But the most interesting things in his talk were about other kinds of problems that can be cast as machine translation: translating images into captions, for example, or translating brain images into the sentences the subjects are currently reading! Cho's implication was that mind reading is just around the corner...
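For my own notes, here is a toy numpy sketch of that encoder / meaning-representation / decoder structure. It is not Cho's system (his uses trained recurrent networks); the weights here are random and untrained, and all the names are made up. The point is only the shape of the computation: the encoder compresses the source sentence into one fixed vector, and the decoder unrolls from that vector.

```python
import numpy as np

rng = np.random.default_rng(42)
V, d = 10, 8                           # toy vocabulary size, hidden size
E = rng.normal(0.0, 0.1, (V, d))       # token embeddings
W, U = rng.normal(0.0, 0.1, (d, d)), rng.normal(0.0, 0.1, (d, d))
Wd, Ud = rng.normal(0.0, 0.1, (d, d)), rng.normal(0.0, 0.1, (d, d))
Out = rng.normal(0.0, 0.1, (V, d))     # hidden state -> target-vocab scores

def encode(src_tokens):
    """Compress the whole source sentence into one fixed-size vector."""
    h = np.zeros(d)
    for tok in src_tokens:
        h = np.tanh(W @ h + U @ E[tok])
    return h                           # the "meaning representation"

def decode(meaning, n_steps):
    """Unroll a recurrence from the meaning vector, emitting tokens."""
    h, prev, out = meaning, np.zeros(d), []
    for _ in range(n_steps):
        h = np.tanh(Wd @ h + Ud @ prev)
        tok = int(np.argmax(Out @ h))  # greedy pick; real systems use softmax
        out.append(tok)
        prev = E[tok]                  # feed the chosen token back in
    return out

print(decode(encode([1, 4, 2, 7]), n_steps=5))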
This time domain asteroseismology stuff is going to make Angus et al very rich!