As my loyal reader knows, I am getting all interested in measuring stellar oscillations—asteroseismology—in data whose integration times (or sampling intervals) are too long. For example, G dwarfs have oscillation periods in the 5-minute range, whereas the Kepler data are (by and large) 30-min exposures on 30-min centers. The Kepler data are typical for astronomy, but perhaps not a typical example of "Nyquist sampling" problems, in part because the exposures are integrations (or projections or finite-time averages) rather than instantaneous samples of the stellar time series that we care about, and in part because the finite-diameter spacecraft orbit makes the periodic-in-spacecraft-time sampling aperiodic in barycentric time.
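To put a number on that attenuation: averaging a sinusoid of frequency f over an exposure of length T multiplies its amplitude by sinc(f T) = sin(pi f T) / (pi f T). Here is a small numerical sketch of that (the function name and the 5.2-minute test period are mine, chosen for illustration, not anything from a real pipeline):

```python
import numpy as np

# Averaging a unit-amplitude sinusoid of frequency f over an exposure of
# length T attenuates its amplitude by |sinc(f T)| = |sin(pi f T) / (pi f T)|.
# (All names and numbers here are illustrative.)

def integrated_sample(f, t_center, T, n=1024):
    """Numerically average sin(2 pi f t) over an exposure of length T."""
    offsets = (np.arange(n) + 0.5) / n - 0.5   # midpoint grid on [-1/2, 1/2)
    t = t_center + offsets * T
    return np.mean(np.sin(2.0 * np.pi * f * t))

T = 30.0 / 60.0 / 24.0           # 30-minute exposure, in days
f = 1.0 / (5.2 / 60.0 / 24.0)    # a 5.2-minute oscillation, in cycles per day

# Predicted attenuation; note numpy's sinc(x) = sin(pi x) / (pi x).
predicted = abs(np.sinc(f * T))

# Measured attenuation: peak |integrated signal| over a grid of exposure phases.
phases = np.linspace(0.0, 1.0 / f, 64, endpoint=False)
measured = max(abs(integrated_sample(f, t0, T)) for t0 in phases)
```

For a 5.2-minute period and a 30-minute exposure, both numbers come out around 0.037: the signal survives the integration, but at a few percent of its true amplitude.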
The integration point hurts me (it attenuates the amplitude of the super-Nyquist signal) but the aperiodicity helps. I discovered today, however, that I don't even need the aperiodicity: All I need is a good model (causal model, I probably should say) of how the data are generated by the stellar signal. I find that if I properly model the integration time, I can see the short-period signals in the data (or at least in Kepler-like fake data). This isn't surprising; it is like "side-band" frequency information in the standard Nyquist case. The key idea behind all this is that we are not ever going to take a Fourier transform or a Lomb-Scargle periodogram; these tools give you the frequencies in the integration-time-convolved stellar signal. We (Angus, Foreman-Mackey, and I) are going to model the stellar signal prior to convolution with the exposure time window.
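As a toy version of that modeling idea (a sketch under my own assumptions, not the Angus/Foreman-Mackey/Hogg code; every name, the 7.3-minute period, and the noise level are made up for illustration): generate fake long-cadence data by numerically integrating a super-Nyquist sinusoid over each exposure, then fit its amplitude at the known frequency with and without the exposure-averaging in the model. The naive model, which pretends the data are instantaneous samples, recovers an amplitude biased low by the sinc factor; the model that includes the integration recovers the true amplitude.

```python
import numpy as np

# Fake Kepler-like data: a unit-amplitude, super-Nyquist sinusoid observed
# with 30-minute exposures on 30-minute centers, plus a little noise.
# (All names, the period, and the noise level are made up for illustration.)

rng = np.random.default_rng(42)
cadence = 30.0 / 60.0 / 24.0        # days between exposure centers
T = cadence                         # exposure length equals the cadence
t_centers = np.arange(200) * cadence
f = 1.0 / (7.3 / 60.0 / 24.0)       # 7.3-minute period: far above the
                                    # ~24 cycle/day Nyquist frequency

def exposure_average(f, t0, T, n=2048):
    """Numerically average sin(2 pi f t) over one exposure (midpoint rule)."""
    t = t0 + ((np.arange(n) + 0.5) / n - 0.5) * T
    return np.mean(np.sin(2.0 * np.pi * f * t))

data = np.array([exposure_average(f, t0, T) for t0 in t_centers])
data += 1e-3 * rng.standard_normal(t_centers.size)

# Fit the amplitude at the known frequency by linear least squares,
# with and without the exposure integration in the model.
atten = np.sinc(f * T)                                   # sin(pi x)/(pi x)
basis_naive = np.sin(2.0 * np.pi * f * t_centers)        # instantaneous samples
basis_model = atten * basis_naive                        # exposure-averaged model

amp_naive = (basis_naive @ data) / (basis_naive @ basis_naive)
amp_model = (basis_model @ data) / (basis_model @ basis_model)
# amp_naive comes out near the sinc factor (~0.03); amp_model near 1.
```

The point is not the least-squares fit itself; it is that the model predicts the data as they are actually generated, exposure-averaging included, so the amplitude (and, with a frequency grid, the frequency) is inferred for the un-convolved stellar signal.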