2022-11-30

degeneracies and optimization

With Emily Griffith (Colorado) I have been working on a purely data-driven nucleosynthetic model, trained on the abundances measured in stars by the APOGEE surveys. This model looks a lot like a non-negative matrix factorization, so it is a kind of model I've worked with many times in my life. We've figured out an optimization scheme and made it (exceedingly) fast with jax. Nonetheless, we have been having trouble with the optimization: it gets stuck in bad local minima, or even in pathological locations in parameter space.
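To illustrate the kind of model I mean (this is a toy sketch with made-up shapes and synthetic data, using the classic multiplicative-update NMF algorithm rather than our actual jax-based optimizer): the data are approximated as a product of non-negative star amplitudes and non-negative process yield vectors.

```python
import numpy as np

rng = np.random.default_rng(17)

# Hypothetical shapes: 200 stars, 8 abundance ratios, 2 latent processes.
n_stars, n_elements, k = 200, 8, 2

# Synthetic "truth": non-negative amplitudes W and yield vectors H.
W_true = rng.uniform(0.1, 1.0, size=(n_stars, k))
H_true = rng.uniform(0.1, 1.0, size=(k, n_elements))
X = W_true @ H_true + 0.005 * rng.normal(size=(n_stars, n_elements))

def loss(W, H, X):
    """Mean squared residual of the low-rank model."""
    return np.mean((X - W @ H) ** 2)

# Classic Lee--Seung multiplicative updates: they preserve non-negativity
# and monotonically decrease the squared Frobenius loss.
eps = 1e-12
W = rng.uniform(size=(n_stars, k))
H = rng.uniform(size=(k, n_elements))
loss0 = loss(W, H, X)
for _ in range(200):
    W *= (X @ H.T) / (W @ H @ H.T + eps)
    H *= (W.T @ X) / (W.T @ W @ H + eps)

print(loss0, loss(W, H, X))  # the loss should drop substantially
```

Even on this easy synthetic problem, the fit is only determined up to the degeneracies discussed below; different random initializations land on differently scaled (and possibly permuted) factors.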

Today I discussed this model with Soledad Villar (JHU), who warned me that the model has potential pathologies and strong degeneracies. I thought I was breaking these degeneracies with regularizations, but in fact the degeneracies are bigger than I thought. Villar's advice (which aligns with the machine-learning zeitgeist) was to leave the degeneracies free during optimization and then rotate or transform the model to where I want it to be at the end. She also had useful advice about optimizing non-convex functions.
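The basic degeneracy is easy to state: for any invertible matrix R, the factorization W H makes exactly the same predictions as (W R⁻¹)(R H), so no amount of data can distinguish them. A tiny numerical sketch (hypothetical sizes; the unit-norm convention at the end is just one possible gauge-fixing transform, applied post hoc in the spirit of Villar's advice):

```python
import numpy as np

rng = np.random.default_rng(42)

# Two parameter settings of a k=2 factorization with identical predictions.
W = rng.uniform(0.1, 1.0, size=(100, 2))
H = rng.uniform(0.1, 1.0, size=(2, 6))

# Any invertible R gives an equivalent model: (W R^{-1}) (R H) = W H.
R = np.array([[2.0, 0.3],
              [0.1, 1.5]])
W2 = W @ np.linalg.inv(R)
H2 = R @ H
assert np.allclose(W @ H, W2 @ H2)  # indistinguishable by any data

# Post-hoc gauge fixing: rescale each process vector to unit norm and
# push the scales into the amplitudes, choosing one representative
# model out of the degenerate family after optimization is done.
norms = np.linalg.norm(H2, axis=1, keepdims=True)
H_fixed = H2 / norms
W_fixed = W2 * norms.T
assert np.allclose(W_fixed @ H_fixed, W @ H)
```

Note that the unit-norm convention only fixes the scale part of the degeneracy; the remaining rotational or mixing freedom would need a further convention (or physical prior) to pin down.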
