Bill Freeman (MIT) came into town for the day today. In the morning he showed us work his group has been doing to infer the flow of turbulent air through a scene in which high-quality video has been shot. By making assumptions about the background scene, he can look for the motion of the (extremely tiny) optical distortion pattern and try to recover the air movement. Applications include airport safety and astronomical imaging, where in principle observations of resolved objects (Freeman is interested in the Moon, as I have mentioned previously) could be used to build a model of the atmosphere and improve image quality (or adaptive-optics control loops).
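To make the core measurement concrete, here is a toy one-dimensional sketch (my own illustration with made-up numbers, not Freeman's pipeline): recovering the tiny apparent displacement of a static background, which is the signature of refractive air motion, from the phase of the cross-power spectrum.

```python
import numpy as np

def subpixel_shift(reference, distorted):
    """Estimate a tiny global translation of `distorted` relative to
    `reference` from the phase of the cross-power spectrum."""
    n = len(reference)
    R = np.fft.fft(distorted) * np.conj(np.fft.fft(reference))
    # For a pure shift d, the phase at the lowest nonzero frequency
    # bin is -2 * pi * d / n.
    return -np.angle(R[1]) * n / (2.0 * np.pi)

# Toy background "scene": a 1D slice with some texture.
x = np.arange(512)
background = np.sin(0.1 * x) + 0.5 * np.sin(0.37 * x)

# The same slice viewed through turbulent air that displaces it by
# 0.18 pixels, applied here as an exact Fourier-domain shift.
k = np.fft.fftfreq(512)
wiggled = np.real(np.fft.ifft(np.fft.fft(background) *
                              np.exp(-2j * np.pi * k * 0.18)))

print(subpixel_shift(background, wiggled))  # ≈ 0.18
```

A real pipeline would measure such shifts locally (per patch, per frame) and in the presence of noise; this only shows that a displacement far below one pixel is cleanly measurable in principle.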
In the afternoon, Freeman showed (in a seminar) his work on motion amplification. He can take tiny, tiny motions in a video stream (think: your pulse, your breathing, the swaying of a rigid building) and amplify them, importantly without building any model of the motion. His highest-performing systems work remarkably simply: Take a spatial Fourier transform of the images in the video stream, and amplify the frame-to-frame changes in the Fourier-component phases (within some window function, and with some filtering, and so on). This method amplifies the motion but doesn't amplify the noise in the image, because it doesn't amplify the intensity or amplitude of any mode or component. The results are astounding! It is interesting to think of astronomical applications: One might be predicting the future of time-variable nebulae like V838 Monocerotis, supernova 1987A, and η Carinae.
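A minimal one-dimensional sketch of the phase-amplification idea (again my own toy, with parameters I made up; Freeman's actual systems use localized, oriented filters and temporal band-pass filtering rather than a single global FFT):

```python
import numpy as np

def amplify_motion(frames, alpha):
    """Amplify frame-to-frame motion by a factor alpha via Fourier phases.

    Scales the temporal change of each spatial-Fourier-component phase,
    leaving the amplitudes (and hence additive image noise) untouched.
    """
    F = np.fft.fft(frames, axis=1)             # spatial FFT of each frame
    phase = np.angle(F)
    dphase = phase - phase[0]                  # phase change relative to frame 0
    dphase = (dphase + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    F_amp = np.abs(F) * np.exp(1j * (phase + alpha * dphase))
    return np.real(np.fft.ifft(F_amp, axis=1))

# Toy "video": a Gaussian bump drifting by 0.05 pixels per frame,
# far too small a motion to see by eye.
x = np.arange(256)
frames = np.array([np.exp(-0.5 * ((x - 128.0 - 0.05 * t) / 8.0) ** 2)
                   for t in range(20)])
amplified = amplify_motion(frames, alpha=20.0)  # drift becomes ~1 px per frame
```

Because only the phases are pushed around, the amplitude of every component is preserved, which is exactly the no-noise-amplification property described above.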