In our code notebook today, with repeated numerical experiments, Soledad Villar (NYU) and I demonstrated that we can, in some settings, regularly beat the Gauss–Markov estimator, which is the standard linear discriminative regression estimator. We are working in a toy world where everything is linear and Gaussian! And the Gauss–Markov estimator has very good properties, provably so. What gives? The main thing that gives (we think) is that we are adding noise to the features, not just the labels. This case is surprisingly thorny. When there is noise only in the labels, all is cool; as soon as you add noise to the features, all h*ck breaks loose.
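A minimal sketch of why feature noise is so different from label noise (this is my own illustrative simulation, not the notebook's experiment; the sample sizes, noise scales, and seed are all made up): with noise only in the labels, ordinary least squares stays unbiased, but the moment you add Gaussian noise to the features, the fitted slope is attenuated toward zero.

```python
import numpy as np

rng = np.random.default_rng(17)  # arbitrary seed for reproducibility
n, trials = 200, 400
w_true = 2.0  # true slope in the toy linear, Gaussian world

def fit_ols(x, y):
    # one-dimensional least-squares slope through the origin
    return np.sum(x * y) / np.sum(x * x)

slopes_label_noise, slopes_feature_noise = [], []
for _ in range(trials):
    x = rng.normal(size=n)
    y = w_true * x + rng.normal(scale=0.5, size=n)  # label noise only
    slopes_label_noise.append(fit_ols(x, y))
    x_obs = x + rng.normal(scale=0.5, size=n)       # now also noise the features
    slopes_feature_noise.append(fit_ols(x_obs, y))

mean_label = float(np.mean(slopes_label_noise))      # ~2.0: OLS unbiased
mean_feature = float(np.mean(slopes_feature_noise))  # ~1.6: attenuated by
                                                     # var(x) / (var(x) + var(noise))
print(mean_label, mean_feature)
```

In this errors-in-variables setting the classical attenuation factor is var(x) / (var(x) + var(noise)) = 1 / 1.25 = 0.8, so the noisy-feature slope averages about 1.6 instead of 2.0, which is one concrete way the Gauss–Markov optimality story breaks down once the features themselves are noisy.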