2012-12-19

support vector machines

Krik Muandet (MPI-IS) led a discussion today of support vector machines and extensions thereof. For me the most valuable part of the discussion was an explanation of the kernel trick, which is absolutely brilliant: it implicitly maps the data into a much higher-dimensional feature space and permits accurate classification there without enormous computational load. Indeed, because the classifier only ever needs inner products between data points, the kernel obviates any explicit construction of the higher-dimensional space at all (a toy sketch of this appears below). Muandet then went on to discuss his support measure machine extension of SVM; in this extension the data points are replaced by probability distribution functions (yes, the data are now PDFs). I was pleased to see that the SMM contains, on the inside, something that looks very like chi-squareds or likelihood ratios. Jonathan Goodman asked how the SMM classifier differs from what you would get if you just ran SVM on a dense sampling of the input PDFs. Of course it has to differ, because it takes different inputs, but the question, correctly posed, is interesting. We ended the talk with lunch, at which we asked Muandet to do some demos that elucidate the relationship between SVM and nearest neighbors (Fadely's current passion).
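Here is a minimal sketch of the kernel-trick point, using the degree-2 polynomial kernel as an example of my own choosing (not one from the talk): the kernel value equals an inner product in a feature space of all pairwise coordinate products, a space we never have to build.

```python
import numpy as np

rng = np.random.default_rng(42)
x, z = rng.normal(size=3), rng.normal(size=3)

def phi(v):
    # explicit degree-2 feature map: all pairwise products v_i * v_j
    # (9 dimensions for d=3; this blows up as d^2, and far worse for
    # higher polynomial degrees)
    return np.outer(v, v).ravel()

explicit = phi(x) @ phi(z)   # inner product in the big feature space
kernel = (x @ z) ** 2        # same number, computed with only d operations

print(np.allclose(explicit, kernel))  # True
```

The identity is just (x . z)^2 = sum_ij x_i x_j z_i z_j; for kernels like the Gaussian the implicit feature space is infinite-dimensional, so there is no explicit map to compute at all.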
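As I understand the SMM idea, the kernel between two distributions is the expected kernel value between their draws, which is the inner product of their kernel mean embeddings. A rough sketch under that assumption (this is my illustration, not Muandet's code; the sample sizes and bandwidth are made up), with distributions represented by samples:

```python
import numpy as np

def rbf(a, b, sigma=1.0):
    # Gaussian kernel matrix between point sets of shape (n, d) and (m, d)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mean_embedding_kernel(xs, ys, sigma=1.0):
    # K(P, Q) = E_{x~P, y~Q} k(x, y), estimated by averaging over all
    # pairs of samples; this is the inner product of the mean embeddings
    return rbf(xs, ys, sigma).mean()

rng = np.random.default_rng(0)
P = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # one "data point"
Q = rng.normal(loc=1.0, scale=0.5, size=(200, 2))  # another "data point"

print(mean_embedding_kernel(P, Q))
```

The resulting Gram matrix over distributions can then be fed to any standard SVM as a precomputed kernel, which is presumably how the machinery reduces back to familiar pieces.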
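On the SVM-versus-nearest-neighbors question, one demo of the kind we had in mind (my guess at it, not what Muandet showed): an RBF-kernel SVM with a very narrow kernel behaves much like one-nearest-neighbor, since each training point's influence becomes local. The bandwidth and data here are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# narrow kernel (large gamma) makes the SVM decision nearly local
X, y = make_moons(n_samples=200, noise=0.2, random_state=1)
Xt, yt = make_moons(n_samples=500, noise=0.2, random_state=2)

svm = SVC(kernel="rbf", gamma=50.0, C=1e3).fit(X, y)
knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)

agree = (svm.predict(Xt) == knn.predict(Xt)).mean()
print(f"SVM / 1-NN agreement on test points: {agree:.2f}")
```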

I spent a big chunk of the afternoon discussing gravitational radiation detection with Lam Hui (Columbia). I think we have resolved the differences that emerged during his talk here this semester.
