Imagine you have an insane robot, randomly taking images of the sky. Every so often it takes an image that includes in its field of view a star in which you are interested. When it takes such an image, it processes it with some software you don't get to see, and it returns either a magnitude for the star with an uncertainty estimate, or else no measurement and no uncertainty information at all (presumably because it didn't detect the star, but you don't really know). I am in Berkeley this week to answer the following question: Do the null values (the times at which the robot gives you no output at all) provide useful information about the lightcurve of the star?
To make life more interesting, we are assuming that we don't believe the uncertainty estimates reported by the robot, and that the robot never returns any information about upper limits, detection limits, or its data processing. That is, you know nothing about the non-detections. You can't even assume that other stars detected in the same images are useful for saying anything about them.
Of course the answer to this question is yes, as Joey Richards (Berkeley), James Long (Berkeley), and I are demonstrating this week. As long as you don't think that the robot's insane software varies in sync with the properties of your star, the nulls (non-detections) are very informative. We even have examples of badly observed variable stars for which using the nulls is necessary to get a correct period determination.
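To make the idea concrete, here is a minimal sketch of how nulls can enter a period-search likelihood. Everything in it is an assumption of mine, not the actual Richards/Long/Hogg machinery: a sinusoidal star, Gaussian noise, and a simple censoring model in which the robot reports a magnitude only when the star is brighter (smaller magnitude) than a fixed threshold. Detections contribute ordinary Gaussian terms; each null contributes the probability that the noisy magnitude exceeded the threshold.

```python
import math
import random

random.seed(1)

# Toy parameters (all invented for this sketch, not from any real survey):
P_TRUE, M0, AMP, SIGMA, M_LIM = 3.7, 15.0, 0.8, 0.05, 15.3

# Simulate the insane robot: random observation times; a magnitude is
# reported only when the noisy value is brighter than the threshold.
times = [random.uniform(0.0, 200.0) for _ in range(400)]
data = []  # list of (t, m_obs) pairs, with m_obs = None for a null
for t in times:
    m = M0 + AMP * math.sin(2 * math.pi * t / P_TRUE) + random.gauss(0.0, SIGMA)
    data.append((t, m if m < M_LIM else None))

def log_sf(x, mu, sigma):
    """log P(N(mu, sigma) >= x), floored to avoid log(0) on underflow."""
    sf = 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2.0)))
    return math.log(max(sf, 1e-300))

def log_like(period):
    """Joint log-likelihood: Gaussian terms for detections, plus a
    censoring term for each null (probability the noisy magnitude
    came out fainter than the detection threshold)."""
    ll = 0.0
    for t, m in data:
        model = M0 + AMP * math.sin(2 * math.pi * t / period)
        if m is not None:
            ll += -0.5 * ((m - model) / SIGMA) ** 2
        else:
            ll += log_sf(M_LIM, model, SIGMA)
    return ll
```

Scanning `log_like` over a grid of trial periods recovers `P_TRUE`: at the wrong period the nulls land at phases where the model says the star should have been bright, and those censoring terms are severely penalized. In a real application the threshold would be unknown and would have to be fit or marginalized out; here it is fixed only to keep the sketch short.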
The crazy thing is that in astronomy there are many data sets with this insane-robot property: lots of catalogs created by methods that no one precisely knows, missing objects for reasons no one precisely knows. And yet they are useful. We are hoping to make them much more useful.