**UPDATE 1/15/2014**: This blog is no longer in service.

This post is now located at: http://slendermeans.org/ml4h-ch6.html

Thanks,

-c.




I believe the alpha parameter in glmnet is not the regularization parameter but the elastic-net mixing weight, which shifts the penalty between the lasso and ridge extremes.

That’s right. (I think I note that in the notebook.) The loss functions also appear to be parameterized differently in glmnet and scikit-learn’s Lasso, since the two give penalty parameters on completely different scales, even though the coefficient estimates produced at those penalty values are the same. I haven’t pinned down the discrepancy yet.
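For readers mapping between the two libraries: glmnet’s `alpha` corresponds to scikit-learn’s `l1_ratio` (the lasso/ridge mixing weight), while glmnet’s `lambda` plays the role of scikit-learn’s `alpha` (the overall penalty strength). A minimal sketch of that naming correspondence, using synthetic data and arbitrary parameter values (one plausible source of the scale discrepancy, noted in a comment below, is glmnet’s default standardization of the predictors, but that is an assumption, not something pinned down here):

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

# Synthetic regression data (values are arbitrary, for illustration only)
rng = np.random.RandomState(0)
X = rng.randn(100, 5)
y = X @ np.array([1.5, 0.0, -2.0, 0.0, 0.5]) + 0.1 * rng.randn(100)

# scikit-learn's alpha is the penalty STRENGTH (glmnet's lambda);
# l1_ratio is the lasso/ridge MIXING weight (glmnet's alpha).
lasso = Lasso(alpha=0.1).fit(X, y)
enet = ElasticNet(alpha=0.1, l1_ratio=1.0).fit(X, y)  # l1_ratio=1 -> pure lasso

# With l1_ratio=1, ElasticNet reduces to the Lasso objective,
# so the two fits should agree.
assert np.allclose(lasso.coef_, enet.coef_)

# Note: glmnet standardizes predictors internally by default
# (standardize=TRUE); scikit-learn does not, so comparing penalty
# values across the two may require standardizing X and y first.
```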

Pingback: Will it Python? Machine Learning for Hackers, Chapter 7: Numerical optimization with deterministic and stochastic methods | Slender Means