In an argumentative session, we decided that everything we did and thought yesterday about combining images was wrong, and re-started. The argument was long and complicated, but it ended up delivering a very simple algorithm. The idea is to use the rank information in an input image to update or improve our beliefs about the rank information for pixels in a combined or reference image. The point of this is that we don't believe the intensity information in the images, but we do believe that brighter parts are probably truly brighter. A lot of what made things complicated is that sometimes an input image covers only part of the reference image; in this case we only want to use it to reorder the pixels within its footprint.
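One very simplified way to sketch the footprint-restricted reordering step (the function and variable names here are my own, not part of the actual algorithm, and a real implementation would handle ties and uncertainty rather than hard-sorting):

```python
import numpy as np

def update_ranks(reference, input_img, footprint):
    """Reorder the reference pixels inside the input image's footprint
    so their ranking agrees with the input image's brightness ranking.
    Pixels outside the footprint are left untouched.
    (Hypothetical sketch; a belief-updating version would be softer.)"""
    ref_vals = reference[footprint]
    in_vals = input_img[footprint]
    # positions of footprint pixels ordered from faintest to brightest
    # according to the input image
    order = np.argsort(in_vals)
    # reassign the sorted reference values in that order: the pixel the
    # input image calls brightest gets the brightest reference value
    new_vals = np.empty_like(ref_vals)
    new_vals[order] = np.sort(ref_vals)
    out = reference.copy()
    out[footprint] = new_vals
    return out
```

This keeps the reference image's set of intensity values fixed (we distrust the intensities) while letting the input image vote only on the ordering, and only within its own footprint.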
In a not totally unrelated conversation we asked the following question: How can you combine the rolls of two six-sided dice such that you get a random integer uniformly distributed between 1 and 6? The constraint is: you must use the two dice symmetrically. One solution: roll the two dice and then randomly choose one die and read it. We came up with a few others. You can't add the two dice rolls and divide by two, because then the result isn't uniformly distributed between 1 and 6. The central limit theorem is a hard thing to fight against. My favorite solution: make a 6x6 table, in which the numbers 1 through 6 each appear six times, but placed in the table randomly. Roll two dice, use the first to choose the row and the second to choose the column in the table. That's a hash, I think, mapping the two rolls (which jointly produce 36 different outcomes) onto 6 numbers.
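The table solution is easy to check by brute force, since there are only 36 joint outcomes. A small sketch (function names are mine):

```python
import random
from collections import Counter

def make_table(rng):
    # 36 cells containing each of 1..6 exactly six times, shuffled
    cells = [n for n in range(1, 7) for _ in range(6)]
    rng.shuffle(cells)
    return [cells[i * 6:(i + 1) * 6] for i in range(6)]

def combine(table, die1, die2):
    # the first die picks the row, the second picks the column
    return table[die1 - 1][die2 - 1]

# every table built this way maps the 36 equally likely roll pairs
# onto 1..6 with exactly 6 pairs per outcome, i.e. uniformly
rng = random.Random(0)
table = make_table(rng)
counts = Counter(combine(table, d1, d2)
                 for d1 in range(1, 7) for d2 in range(1, 7))
```

Because each number occupies exactly six of the 36 equally likely cells, the output is uniform no matter how the shuffle came out.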
At the end of the day, Lang and I used pixel rankings to identify human-viewable images that were built from the same source data. The idea is that the ordering of the noisy pixel values in the sky is like a "digital fingerprint". It seems to work like magic.
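A minimal sketch of how such a rank fingerprint might be compared, assuming the images are already aligned pixel-to-pixel (the function names and the use of a plain correlation of ranks, essentially a Spearman coefficient, are my own simplifications, not necessarily what we actually ran):

```python
import numpy as np

def rank_fingerprint(image, mask):
    """Ranks of the (noisy) pixel values inside a sky mask.
    Any monotonic recalibration of the intensities leaves this unchanged."""
    vals = image[mask]
    order = np.argsort(vals)
    ranks = np.empty(len(vals), dtype=int)
    ranks[order] = np.arange(len(vals))
    return ranks

def rank_similarity(fp1, fp2):
    # correlation of the two rank sequences (a Spearman-like statistic);
    # near 1 for images built from the same source data, near 0 otherwise
    return np.corrcoef(fp1, fp2)[0, 1]
```

The reason this works like magic is that the sky noise is effectively random, so the ordering of a few thousand sky pixels is an enormously specific signature, yet it survives stretches, rescalings, and other monotonic processing applied when making a human-viewable image.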