For high-dimensional generalized linear models, traditional fitting routines provide no $l_1$ penalty, so whenever a penalty term is needed one would have to write the optimization code by hand. It turns out that glmnet solves exactly this problem.
The glmnet package is very efficient at maximizing likelihoods with $l_1$ and $l_2$ penalty terms, and it makes good use of sparsity in the X matrix.
The complexity of a lasso fit is controlled by the parameter lambda: the larger lambda is, the heavier the penalty on model complexity, and the fewer variables remain in the model. Besides lambda there is the parameter alpha, which controls how the model behaves when predictors are highly correlated: alpha=1 (the default in R's glmnet) gives the lasso, alpha=0 gives ridge regression, and 0<alpha<1 gives a general elastic net.
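A minimal sketch with simulated data (the object names and the data are illustrative only); the three fits differ only in the alpha argument:

library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)    # 100 observations, 20 predictors
y <- rnorm(100)                          # continuous response, family = "gaussian"
fit_lasso <- glmnet(x, y, alpha = 1)     # alpha = 1: lasso (the default)
fit_ridge <- glmnet(x, y, alpha = 0)     # alpha = 0: ridge
fit_enet  <- glmnet(x, y, alpha = 0.5)   # 0 < alpha < 1: elastic net

Each call computes the whole regularization path over a decreasing sequence of lambda values.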

According to Statistical Learning with Sparsity: The Lasso and Generalizations by Hastie (a Stanford statistician), Tibshirani and Wainwright, variable selection for the following five classes of models can be handled with the R glmnet package: linear, logistic, multinomial, Poisson, and Cox regression models.

Function reference

glmnet()

glmnet(x, y, family = c("gaussian", "binomial", "poisson", "multinomial",
"cox", "mgaussian"), weights = NULL, offset = NULL, alpha = 1,
nlambda = 100, lambda.min.ratio = ifelse(nobs < nvars, 0.01, 1e-04),
lambda = NULL, standardize = TRUE, intercept = TRUE,
thresh = 1e-07, dfmax = nvars + 1, pmax = min(dfmax * 2 + 20,
nvars), exclude = NULL, penalty.factor = rep(1, nvars),
lower.limits = -Inf, upper.limits = Inf, maxit = 1e+05,
type.gaussian = ifelse(nvars < 500, "covariance", "naive"),
type.logistic = c("Newton", "modified.Newton"),
standardize.response = FALSE, type.multinomial = c("ungrouped",
"grouped"), relax = FALSE, trace.it = 0, ...)

Description

Fit a generalized linear model via penalized maximum likelihood. The regularization path is computed for the lasso or elasticnet penalty at a grid of values for the regularization parameter lambda. Can deal with all shapes of data, including very large sparse data matrices. Fits linear, logistic and multinomial, poisson, and Cox regression models.

Arguments

x: input matrix, of dimension nobs x nvars; each row is an observation vector. Can be in sparse matrix format (inheriting from class "sparseMatrix" as in package Matrix; not yet available for family="cox").
y: response variable. Quantitative for family="gaussian" or family="poisson" (non-negative counts). For family="binomial" it should be either a factor with two levels, or a two-column matrix of counts or proportions (the second column is treated as the target class; for a factor, the last level in alphabetical order is the target class). For family="multinomial", it can be a factor with nc>=2 levels, or a matrix with nc columns of counts or proportions. For either "binomial" or "multinomial", if y is presented as a vector, it will be coerced into a factor. For family="cox", y should be a two-column matrix with columns named 'time' and 'status'. The latter is a binary variable, with '1' indicating death and '0' indicating right censoring. The function Surv() in package survival produces such a matrix. For family="mgaussian", y is a matrix of quantitative responses.
family: response type (see above). Either a character string representing one of the built-in families, or else a glm() family object.
weights: observation weights. Can be total counts if responses are proportion matrices. Default is 1 for each observation.
offset: a vector of length nobs that is included in the linear predictor (an nobs x nc matrix for the "multinomial" family). Useful for the "poisson" family (e.g. log of exposure time), or for refining a model by starting at a current fit. Default is NULL. If supplied, then values must also be supplied to the predict function.
alpha: the elastic-net mixing parameter, with $0 \le \alpha \le 1$. The penalty is defined as $(1-\alpha)/2\,\|\beta\|_2^2 + \alpha\|\beta\|_1$. alpha=1 is the lasso penalty, and alpha=0 the ridge penalty.
nlambda: the number of lambda values; default is 100.
lambda.min.ratio: smallest value for lambda, as a fraction of lambda.max, the (data-derived) entry value (i.e. the smallest value for which all coefficients are zero). The default depends on the sample size nobs relative to the number of variables nvars. If nobs > nvars, the default is 0.0001, close to zero. If nobs < nvars, the default is 0.01. A very small value of lambda.min.ratio will lead to a saturated fit in the nobs < nvars case. This is undefined for "binomial" and "multinomial" models, and glmnet will exit gracefully when the percentage deviance explained is almost 1.
lambda: a user-supplied lambda sequence. Typical usage is to have the program compute its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this. WARNING: use with care. Avoid supplying a single value for lambda (for predictions after CV use predict() instead); supply instead a decreasing sequence of lambda values. glmnet relies on its warm starts for speed, and it is often faster to fit a whole path than to compute a single fit.
standardize: logical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is standardize=TRUE. If variables are already in the same units, you might not wish to standardize. See details below for y standardization with family="gaussian".
intercept: should intercept(s) be fitted (default=TRUE) or set to zero (FALSE).
thresh: convergence threshold for coordinate descent. Each inner coordinate-descent loop continues until the maximum change in the objective after any coefficient update is less than thresh times the null deviance. Default value is 1e-07.
dfmax: limit the maximum number of variables in the model. Useful for very large nvars, if a partial path is desired.
pmax: limit the maximum number of variables ever to be nonzero.
exclude: indices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor (next item).
penalty.factor: separate penalty factors can be applied to each coefficient. This is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage, and that variable is always included in the model. Default is 1 for all variables (and implicitly infinity for variables listed in exclude). Note: the penalty factors are internally rescaled to sum to nvars, and the lambda sequence will reflect this change. (See the sketch after this list.)
lower.limits: vector of lower limits for each coefficient; default -Inf. Each of these must be non-positive. Can be presented as a single value (which will then be replicated), else a vector of length nvars.
upper.limits: vector of upper limits for each coefficient; default Inf. See lower.limits.
maxit: maximum number of passes over the data for all lambda values; default is 10^5.
type.gaussian: two algorithm types are supported for (only) family="gaussian". The default when nvars < 500 is type.gaussian="covariance", which saves all inner products ever computed. This can be much faster than type.gaussian="naive", which loops through nobs every time an inner product is computed. The latter can be far more efficient for nvars >> nobs situations, or when nvars > 500.
type.logistic: if "Newton" then the exact Hessian is used (default), while "modified.Newton" uses an upper bound on the Hessian, and can be faster.
standardize.response: this is for the family="mgaussian" family, and allows the user to standardize the response variables.
type.multinomial: if "grouped" then a grouped lasso penalty is used on the multinomial coefficients for a variable. This ensures they are all in or out together. The default is "ungrouped".
relax: if TRUE then for each active set in the path of solutions, the model is refit without any regularization. See details for more information. This argument is new, and users may experience convergence issues with small datasets, especially with non-Gaussian families. Limiting the value of 'maxp' can alleviate these issues in some cases.
trace.it: if trace.it=1, then a progress bar is displayed; useful for big models that take a long time to fit.
...: additional arguments used in relax.glmnet. These include some of the original arguments to 'glmnet', and each must be named if used.
fit: for relax.glmnet, a fitted 'glmnet' object.
maxp: a limit on how many relaxed coefficients are allowed. Default is 'n-3', where 'n' is the sample size. This may not be sufficient for non-Gaussian families, in which case users should supply a smaller value. This argument can be supplied directly to 'glmnet'.
path: since glmnet does not do stepsize optimization, the Newton algorithm can get stuck and not converge, especially with relaxed fits. With path=TRUE, each relaxed fit on a particular set of variables is computed pathwise using the original sequence of lambda values (with a zero attached to the end). Not needed for Gaussian models, and should not be used unless needed, since it will lead to longer compute times. Default is path=FALSE.
check.args: should relax.glmnet make sure that all the data-dependent arguments used in creating 'fit' have been resupplied. Default is 'TRUE'.
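A short, hedged sketch of a few of these arguments in use (reusing the simulated x and y from the first sketch; the chosen variable indices are arbitrary illustrations):

pf <- rep(1, 20)
pf[1:2] <- 0                                   # variables 1-2 get no shrinkage, always included
fit <- glmnet(x, y, alpha = 1,
              penalty.factor = pf,
              exclude = c(19, 20),             # variables 19-20 are dropped entirely
              nlambda = 50, standardize = TRUE)

# x may also be a sparse matrix from the Matrix package
library(Matrix)
xs <- Matrix(x * rbinom(length(x), 1, 0.1), sparse = TRUE)
fit_sp <- glmnet(xs, y)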

Notes

The objective function for "gaussian" is
$\frac{1}{2n}\,\text{RSS} + \lambda \cdot \text{penalty}.$
For the other models it is
$-\frac{1}{n}\,\text{loglikelihood} + \lambda \cdot \text{penalty}.$
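Putting this together with the penalty defined under alpha above, the full gaussian objective can be restated (this is only a combination of the two formulas already given) as
$$\min_{\beta_0,\beta}\ \frac{1}{2n}\sum_{i=1}^{n}\left(y_i-\beta_0-x_i^{\top}\beta\right)^2+\lambda\left[\frac{1-\alpha}{2}\|\beta\|_2^2+\alpha\|\beta\|_1\right],$$
so alpha=1 recovers the lasso objective and alpha=0 the ridge objective.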

Return values

call: the call that produced this object.
a0: intercept sequence of length length(lambda).
beta: for "elnet", "lognet", "fishnet" and "coxnet" models, an nvars x length(lambda) matrix of coefficients, stored in sparse column format ("CsparseMatrix"). For "multnet" and "mgaussian", a list of nc such matrices, one for each class.
lambda: the actual sequence of lambda values used. When alpha=0, the largest lambda reported does not quite give the zero coefficients reported (lambda=Inf would in principle). Instead, the largest lambda for alpha=0.001 is used, and the sequence of lambda values is derived from this.
dev.ratio: the fraction of (null) deviance explained (for "elnet", this is the R-squared). The deviance calculations incorporate weights if present in the model. The deviance is defined to be 2*(loglike_sat - loglike), where loglike_sat is the log-likelihood for the saturated model (a model with a free parameter per observation). Hence dev.ratio=1-dev/nulldev.
nulldev: null deviance (per observation). This is defined to be 2*(loglike_sat - loglike(Null)); the NULL model refers to the intercept model, except for the Cox, where it is the 0 model.
df: the number of nonzero coefficients for each value of lambda. For "multnet", this is the number of variables with a nonzero coefficient for any class.
dfmat: for "multnet" and "mrelnet" only. A matrix consisting of the number of nonzero coefficients per class.
dim: dimension of the coefficient matrix (or matrices).
nobs: number of observations.
npasses: total passes over the data summed over all lambda values.
offset: a logical variable indicating whether an offset was included in the model.
jerr: error flag, for warnings and errors (largely for internal debugging).
relaxed: if relax=TRUE, this additional item is another glmnet object with different values for beta and dev.ratio.
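A small sketch of how these components are typically inspected (x and y as in the earlier sketches):

fit <- glmnet(x, y)
fit$lambda                # the lambda sequence actually used
fit$df                    # number of nonzero coefficients at each lambda
fit$dev.ratio             # fraction of null deviance explained along the path
dim(fit$beta)             # nvars x length(lambda) sparse coefficient matrix
coef(fit, s = 0.05)       # intercept a0 plus beta at lambda = 0.05 (interpolated)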

cv.glmnet()

cv.glmnet(x, y, family = c("gaussian", "binomial", "poisson", "multinomial",
"cox", "mgaussian"), weights = NULL, offset = NULL, lambda = NULL,
type.measure = c("default", "mse", "deviance", "class", "auc", "mae",
"C"), nfolds = 10, foldid = NULL, alignment = c("lambda",
"fraction"), grouped = TRUE, keep = FALSE, parallel = FALSE,
gamma = c(0, 0.25, 0.5, 0.75, 1), relax = FALSE, trace.it = 0, ...)

Description

Does k-fold cross-validation for glmnet, produces a plot, and returns values for lambda (and gamma if relax=TRUE).

Arguments

lambda: optional user-supplied lambda sequence; default is NULL, and glmnet chooses its own sequence.
type.measure: loss to use for cross-validation. Currently five options, not all available for all models. The default is type.measure="deviance" (i.e. -2 times the log-likelihood), which uses squared error for gaussian models (a.k.a. type.measure="mse" there), deviance for logistic and poisson regression, and partial likelihood for the Cox model. type.measure="class" applies to binomial and multinomial logistic regression only, and gives misclassification error. type.measure="auc" is for two-class logistic regression only, and gives area under the ROC curve. type.measure="mse" or type.measure="mae" (mean absolute error) can be used by all models except the "cox"; they measure the deviation from the fitted mean to the response. type.measure="C" is Harrell's concordance measure, only available for Cox models.
nfolds: number of folds; default is 10. Although nfolds can be as large as the sample size (leave-one-out CV), it is not recommended for large datasets. Smallest value allowable is nfolds=3.
foldid: an optional vector of values between 1 and nfolds identifying which fold each observation is in. If supplied, nfolds can be missing.
alignment: this is an experimental argument, designed to fix the problems users were having with CV, with possible values "lambda" (the default) or "fraction". With "lambda" the lambda values from the master fit (on all the data) are used to line up the predictions from each of the folds. In some cases this can give strange values, since the effective lambda values in each fold could be quite different.
grouped: this is an experimental argument, with default TRUE, and can be ignored by most users. For all models except the "cox", this refers to computing nfolds separate statistics, and then using their mean and estimated standard error to describe the CV curve. If grouped=FALSE, an error matrix is built up at the observation level from the predictions from the nfolds fits, and then summarized (does not apply to type.measure="auc"). For the "cox" family, grouped=TRUE obtains the CV partial likelihood for the Kth fold by subtraction: by subtracting the log partial likelihood evaluated on the full dataset from that evaluated on the (K-1)/K dataset. This makes more efficient use of risk sets. With grouped=FALSE the log partial likelihood is computed only on the Kth fold.
keep: if keep=TRUE, a prevalidated array is returned containing fitted values for each observation and each value of lambda. This means these fits are computed with this observation and the rest of its fold omitted. The foldid vector is also returned. Default is keep=FALSE. If relax=TRUE, then a list of such arrays is returned, one for each value of 'gamma'. Note: if the value 'gamma=1' is omitted, this case is included in the list since it corresponds to the original 'glmnet' fit.
parallel: if TRUE, use parallel foreach to fit each fold. A parallel backend must be registered beforehand, such as doMC or others. See the example below.
gamma: the values of the parameter for mixing the relaxed fit with the regularized fit, between 0 and 1; default is gamma = c(0, 0.25, 0.5, 0.75, 1).
relax: if TRUE, then CV is done with respect to the mixing parameter gamma as well as lambda. Default is relax=FALSE.
trace.it: if trace.it=1, then progress bars are displayed; useful for big models that take a long time to fit. Limited tracing if parallel=TRUE.
...: other arguments that can be passed to glmnet.
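The example below is a minimal sketch of both the plain and the parallel call (x and y as before); doMC is just one possible foreach backend and is assumed to be installed:

cvfit <- cv.glmnet(x, y, type.measure = "mse", nfolds = 10)

library(doMC)                    # register a parallel backend first
registerDoMC(cores = 2)
cvfit_par <- cv.glmnet(x, y, nfolds = 10, parallel = TRUE)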

Notes

cv.glmnet() does not search over values of $\alpha$. If relax=TRUE, gamma is used when training the model. If $\eta$ is the lasso/elastic-net fit and $\eta_R$ is the relaxed fit (the coefficients refit without a penalty), then the relaxed fit mixed by gamma is
$\eta(\gamma)=(1-\gamma)\eta_R+\gamma\eta.$
CV can select these two parameters simultaneously.
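A brief sketch of a relaxed fit (again reusing x and y); the gamma values shown are the defaults:

cvr <- cv.glmnet(x, y, relax = TRUE)     # gamma defaults to c(0, 0.25, 0.5, 0.75, 1)
plot(cvr)                                # CV curves (for a relaxed fit, one per gamma value)
predict(cvr, newx = x[1:3, ], s = "lambda.min", gamma = "gamma.min")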

Return values

lambda: the values of lambda used in the fits.
cvm: the mean cross-validated error; a vector of length length(lambda).
cvsd: estimate of standard error of cvm.
cvup: upper curve = cvm+cvsd.
cvlo: lower curve = cvm-cvsd.
nzero: number of non-zero coefficients at each lambda.
name: a text string indicating the type of measure (for plotting purposes).
glmnet.fit: a fitted glmnet object for the full data.
lambda.min: value of lambda that gives minimum cvm.
lambda.1se: largest value of lambda such that the error is within 1 standard error of the minimum.
fit.preval: if keep=TRUE, this is the array of prevalidated fits. Some entries can be NA, if that and subsequent values of lambda are not reached for that fold.
foldid: if keep=TRUE, the fold assignments used.
relaxed: if relax=TRUE, this additional item has the CV info for each of the mixed fits.
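In practice the two stored lambda values are what most users pull out of the CV object; a short sketch (x and y as before):

cvfit <- cv.glmnet(x, y)
cvfit$lambda.min                  # lambda with the smallest mean CV error
cvfit$lambda.1se                  # largest lambda within one SE of the minimum
coef(cvfit, s = "lambda.1se")     # sparse coefficient vector at that lambda
plot(cvfit)                       # CV curve with error bars and both lambdas marked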

plot()

## S3 method for class 'glmnet'
plot(x, xvar = c("norm", "lambda", "dev"),
label = FALSE, ...)
## S3 method for class 'mrelnet'
plot(x, xvar = c("norm", "lambda", "dev"),
label = FALSE, type.coef = c("coef", "2norm"), ...)
## S3 method for class 'multnet'
plot(x, xvar = c("norm", "lambda", "dev"),
label = FALSE, type.coef = c("coef", "2norm"), ...)
## S3 method for class 'relaxed'
plot(x, xvar = c("lambda", "dev"), label = FALSE,
gamma = 1, ...)

Description

Produces a coefficient profile plot of the coefficient paths for a fitted “glmnet” object.

Arguments

x: fitted "glmnet" model.
xvar: what is on the X-axis. "norm" plots against the L1-norm of the coefficients, "lambda" against the log-lambda sequence, and "dev" against the percent deviance explained.
...: other graphical parameters to plot.
type.coef: if type.coef="2norm" then a single curve per variable, else if type.coef="coef", a coefficient plot per response.
gamma: value of the mixing parameter for a "relaxed" fit.
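A quick sketch of the three x-axis choices for a gaussian fit (x and y as before):

fit <- glmnet(x, y)
plot(fit)                                  # default xvar = "norm": against the L1 norm
plot(fit, xvar = "lambda", label = TRUE)   # against log(lambda), labelling the curves
plot(fit, xvar = "dev")                    # against the percent deviance explained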

predict()

## S3 method for class 'cv.glmnet'
predict(object, newx, s = c("lambda.1se",
"lambda.min"), ...)
## S3 method for class 'cv.relaxed'
predict(object, newx, s = c("lambda.1se",
"lambda.min"), gamma = c("gamma.1se", "gamma.min"), ...)

Description

This function makes predictions from a cross-validated glmnet model, using the stored "glmnet.fit" object, and the optimal value chosen for lambda (and gamma for a 'relaxed' fit).

Arguments

object: fitted "cv.glmnet" or "cv.relaxed" object.
newx: matrix of new values for x at which predictions are to be made. Must be a matrix; can be sparse as in the Matrix package. See documentation for predict.glmnet.
s: value(s) of the penalty parameter lambda at which predictions are required. Default is the value s="lambda.1se" stored on the CV object. Alternatively s="lambda.min" can be used. If s is numeric, it is taken as the value(s) of lambda to be used. (For historical reasons the symbol 's' rather than 'lambda' is used to reference this parameter.)
...: not used. Other arguments to predict.
gamma: value (single) of 'gamma' at which predictions are to be made.
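A minimal prediction sketch (cvfit and x as in the CV example above):

predict(cvfit, newx = x[1:5, ])                    # default: s = "lambda.1se"
predict(cvfit, newx = x[1:5, ], s = "lambda.min")  # at the lambda minimizing CV error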

print()

## S3 method for class 'cv.glmnet'
print(x, digits = max(3, getOption("digits") - 3),
...)

Description

Print a summary of the results of cross-validation for a glmnet model.

Arguments

x: fitted 'cv.glmnet' object.
digits: significant digits in printout.
...: additional print arguments.

Common errors

  • Error in lognet(x, is.sparse, ix, jx, y, weights, offset, alpha, nobs, : (list) object cannot be coerced to type 'double'.
    Cause: x is not of matrix type.
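A common fix is to build a numeric design matrix first; a sketch assuming a hypothetical data.frame df whose response column is named y:

x <- model.matrix(y ~ ., data = df)[, -1]   # expand factors, drop the intercept column
fit <- glmnet(x, df$y, family = "binomial")
# or, if every column of df except y is already numeric:
# x <- as.matrix(df[, setdiff(names(df), "y")])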

References:
https://web.stanford.edu/~hastie/Papers/glmnet.pdf
https://cran.rstudio.com/

https://blog.csdn.net/orchidzouqr/article/details/53582801
