I spent time on the long weekend and today working through the front parts of a new paper by Matthias Samland (MPIA), who is applying ideas we used for our pixel-level model for Kepler data to high-contrast (coronagraphic) imaging. Most high-performance data pipelines for coronagraphic imaging model the residual speckles in the data with a data-driven model. However, most of those models are spatial models: they are models for the image or for small image patches. They don't really capture the continuous time dependence of the speckles. In Samland's work, he is building temporal models, which don't capture the spatial continuity but do capture the time structure. The best possible methods I can imagine would capture some of both. Or really the right amount of both! But Samland's method is good for working at very small “inner working angle”, where you don't have much training data for a spatial model because there just isn't that much space very near the null point.
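To make the temporal idea concrete, here is a minimal toy sketch of the general approach (predict each target pixel's time series from the time series of other pixels, then subtract the prediction). This is my own illustrative example under assumed names and an assumed ridge-regression fit, not Samland's actual method or pipeline.

```python
# Toy sketch of a *temporal* speckle model: each pixel's time series is
# regressed onto the time series of the other pixels. Illustrative only;
# all names and the ridge choice are assumptions, not Samland's code.
import numpy as np

rng = np.random.default_rng(42)

# Fake data cube: n_times frames of n_pix pixels (flattened), dominated
# by slowly varying common-mode "speckles" plus noise.
n_times, n_pix = 500, 64
t = np.linspace(0.0, 1.0, n_times)
speckles = np.outer(np.sin(7.0 * t) + 0.5 * np.cos(3.0 * t),
                    rng.normal(size=n_pix))
cube = speckles + 0.05 * rng.normal(size=(n_times, n_pix))

def temporal_speckle_fit(cube, target, ridge=1e-2):
    """Predict pixel `target`'s time series from all *other* pixels'
    time series via ridge-regularized linear regression, and return the
    speckle-subtracted residual time series."""
    y = cube[:, target]                   # (n_times,)
    X = np.delete(cube, target, axis=1)   # (n_times, n_pix - 1)
    # Ridge solution: w = (X^T X + lambda I)^{-1} X^T y
    A = X.T @ X + ridge * np.eye(X.shape[1])
    w = np.linalg.solve(A, X.T @ y)
    return y - X @ w

residual = temporal_speckle_fit(cube, target=10)
print("rms before: %.3f  after: %.3f" % (cube[:, 10].std(), residual.std()))
```

Note that nothing in this sketch knows where the pixels sit on the detector, which is exactly the point: it exploits time structure, not spatial continuity.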