There is a Monday seminar at Princeton run by the astrophysics graduate students that focuses on useful skills and knowledge around research, rather than research results. That's a good idea!
I gave the seminar today; I spoke about machine learning in astronomy. I started with my ML taxonomy and my recommendation to understand five beautiful, simple, and instructive examples: SVM, linear regression, PCA, k-means, and GMM with the EM algorithm. How's that for acronyms! Each of these five methods is so beautiful that I think everyone should know how it works and how it generalizes.
Each of these methods sits in a different taxonomic category (in order: classification, regression, dimensionality reduction, clustering, and density estimation). The first three are linear and convex, and each (for related reasons) can be generalized with the kernel trick. I discussed this in the second half of my talk, but my explanation went off the rails; I think I left everyone confused. Time to do more homework.
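For what it's worth, here is a minimal sketch (not from the talk, and using the regression case only) of what the kernel trick buys you: kernel ridge regression with a linear kernel gives exactly the same predictions as ordinary ridge regression, but the dual form only ever touches the data through inner products, so swapping in a nonlinear kernel generalizes the method without changing the algorithm. The data and regularization strength below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))           # 20 points in 3 dimensions (made up)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=20)
lam = 0.1                              # ridge regularization strength (made up)

# Primal ridge regression: w = (X^T X + lam I)^{-1} X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
primal_pred = X @ w

# Dual (kernelized) form: alpha = (K + lam I)^{-1} y, with K = X X^T
K = X @ X.T                            # linear-kernel Gram matrix
alpha = np.linalg.solve(K + lam * np.eye(20), y)
dual_pred = K @ alpha

# The two agree; the dual form needs only inner products, so replacing
# K with (say) a Gaussian kernel gives a nonlinear fit for free.
assert np.allclose(primal_pred, dual_pred)
```

The same only-inner-products observation is what kernelizes the SVM and PCA, which is the "related reasons" point above.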