My only real research today was a short conversation with Soledad Villar (NYU) about generative models. She did a nice experiment in which she tried to generate (with a GAN) two-dimensional vectors from a one-dimensional latent input; that is, to generate at a higher dimension than the input space. It didn't work well! That led to a longer discussion of deep generative models. I opined that GANs have their strange structure to protect the generator from ever having to put support, in the generated space, on the actual data. And she showed me some new objective functions that create other kinds of deep generative models, ones that look a lot more like likelihood optimizations. So we decided to try some of those out in our noisy-training, de-noising context.
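The dimensionality problem in the experiment can be illustrated without any training at all: any smooth map from a 1-d latent to a 2-d output traces out a curve, which has measure zero in the plane. Below is a minimal sketch (all weights and sizes are hypothetical, chosen just for illustration) comparing how much of the plane a 1-d-latent "generator" can cover versus a genuinely 2-d distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "generator": a fixed, untrained two-layer map from a 1-d latent z
# to a 2-d output. The specific weights are arbitrary; the point holds
# for any smooth map out of a 1-d space.
W1 = rng.normal(size=(1, 16))
b1 = rng.normal(size=16)
W2 = rng.normal(size=(16, 2))
b2 = rng.normal(size=2)

def generate(z):
    """Map latents of shape (n, 1) to 2-d points of shape (n, 2)."""
    h = np.tanh(z @ W1 + b1)
    return h @ W2 + b2

def occupied_fraction(x, bins=200, lim=4.0):
    """Fraction of cells in a fine 2-d grid that contain any sample."""
    H, _, _ = np.histogram2d(x[:, 0], x[:, 1], bins=bins,
                             range=[[-lim, lim], [-lim, lim]])
    return (H > 0).mean()

n = 100_000
curve = generate(rng.normal(size=(n, 1)))   # image of a 1-d latent
plane = rng.normal(size=(n, 2))             # a genuinely 2-d target

curve_frac = occupied_fraction(curve)
plane_frac = occupied_fraction(plane)
# The 1-d-latent samples fill far fewer grid cells than the 2-d target,
# no matter how many samples we draw: the generator's support is a curve.
print(curve_frac, plane_frac)
```

This is one way to see why the experiment "didn't work well": no choice of generator weights can put positive density on a two-dimensional data distribution when the latent space is one-dimensional.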