So I have been working on this toy problem: Imagine you have a scene, and each "image" you have of the scene contains only a few photons. Each image, furthermore, is taken at a different, unknown angle. That is, you know neither the scene nor any image's angle, and each image contains only a few photons. This is a toy problem because (a) it is technically trivial, and (b) to my knowledge, no one actually has this problem to solve! (Correct me if I am wrong.) I am using it as a technology-development platform for the diffraction-microscopy projects I want to do.
One piece of infrastructure I built today is a stochastic-gradient optimizer that reads in just one image at a time (just a few photons at a time) and takes a gradient step based only on those new photons. This is standard practice these days in machine learning, but I was skeptical. Well, it rocked. The image below (believe it or not) is my reconstruction of the scene. I also show four of the "images" that were used as data; the images are just scatterings of photons, of course. (As my loyal reader knows, the objective of the optimizer is a marginalized likelihood.)
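For concreteness, here is a minimal sketch of the idea, under my own assumptions rather than the actual code: I use a periodic one-dimensional "scene" (a photon-arrival distribution over pixels), and stand in for the unknown angle with an unknown cyclic shift. Each "image" is a handful of photon positions drawn at a random, unrecorded shift, and the optimizer takes one gradient step per image on that image's shift-marginalized log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(17)
npix = 16                                  # pixels in the periodic 1-d scene
true = np.exp(-0.5 * ((np.arange(npix) - 5.0) / 2.0) ** 2)
true /= true.sum()                         # true scene as a photon-arrival pmf

def make_image(n_photons=4):
    """One 'image': a few photon pixel indices at a random, unrecorded shift."""
    shift = rng.integers(npix)
    return rng.choice(npix, size=n_photons, p=np.roll(true, shift))

def grad_log_marginal(logscene, photons):
    """Gradient (wrt logscene) of log sum_s prod_i p(photon_i | scene, shift s)."""
    scene = np.exp(logscene)
    scene /= scene.sum()
    # likelihood of this image under every possible shift
    liks = np.array([np.prod(np.roll(scene, s)[photons]) for s in range(npix)])
    post = liks / liks.sum()               # posterior over the unknown shift
    # expected photon counts per scene pixel under that shift posterior ...
    counts = np.zeros(npix)
    for s in range(npix):
        # a photon at pixel x under shift s came from scene pixel (x - s) mod npix
        np.add.at(counts, (photons - s) % npix, post[s])
    # ... minus the softmax-normalization term gives the gradient
    return counts - len(photons) * scene

logscene = 0.01 * rng.normal(size=npix)    # tiny noise breaks the shift symmetry
for step in range(4000):                   # one gradient step per few-photon image
    logscene += 0.05 * grad_log_marginal(logscene, make_image())

recon = np.exp(logscene)
recon /= recon.sum()                       # reconstructed scene
```

Because the shifts are never observed, the reconstruction is only determined up to a cyclic shift of the truth; the analogous rotational ambiguity holds in the angle version of the problem.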
I also had a great lunch with Kathryn Johnston, in which we discussed our past and future projects in Milky Way astrophysics and why we are doing them.
That is a really amazing result! How many images did you use to get the final reconstruction? And what was the actual toy "object" being imaged?
Don't know if you write these mainly for yourself or for the "public," but thought you might like to know that I enjoy reading them!
I think I worked out what it says... :)