mgcv {mgcv}    R Documentation

Multiple Smoothing Parameter Estimation by GCV or UBRE

Description

Function to efficiently estimate smoothing parameters in a generalized ridge regression problem with multiple (quadratic) penalties, by GCV or UBRE. The function uses Newton's method in multiple dimensions.

Usage

mgcv(M)

Arguments

The function takes a single list argument M with the following elements:

M$y The response data vector
M$X The design matrix for the problem; note that ncol(M$X) must give the number of model parameters, while nrow(M$X) gives the number of data points.
M$C Matrix containing any linear equality constraints on the problem (i.e. C in Cp=0).
M$S A 3 dimensional array. M$S[i] is a 2 dimensional array containing the elements of the ith penalty matrix. To economize on storage space, rows and columns containing only zeroes need not be stored: it is assumed that M$S[i][j][k] contains element (j+M$off[i], k+M$off[i]) of the ith penalty matrix S_i, and that only the first M$df[i] rows and columns of M$S[i] contain non-zero elements. dim(M$S)[1] must evaluate to the number of penalties, or (only if there are no penalties) to NULL.
M$off Offset values locating the elements of M$S[] in the correct location within each penalty coefficient matrix.
M$fix M$fix[i] is FALSE if the corresponding smoothing parameter is to be estimated, or TRUE if it is to be fixed at zero.
M$df M$df[i] gives the number of rows and columns of M$S[i] that are non-zero.
M$sig2 This serves two purposes. If it is strictly positive then UBRE is used, with sig2 taken as the known error variance. If it is non-positive then GCV is used (and an estimate of the error variance is returned). Note that the variance of y_i is assumed to be sig2/w_i.
M$sp An array of initial smoothing parameter estimates; if any of these is not strictly positive then automatic generation of initial values is used.
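The following sketch illustrates how the fields above fit together for a toy problem with a single penalty; the numeric values are arbitrary, and in practice GAMsetup (see the Examples below) builds most of these fields for you. The inclusion of M$w follows the usage in the Examples; it is not listed among the arguments above.

```r
# Illustrative only: assembling M by hand for one q x q penalty, stored in full.
n <- 10; q <- 4                       # number of data points and model parameters
M <- list()
M$y    <- rnorm(n)                    # response data vector
M$X    <- matrix(rnorm(n * q), n, q)  # n x q design matrix (full column rank)
M$C    <- matrix(0, 0, q)             # no linear equality constraints
M$w    <- rep(1, n)                   # weights: var(y_i) assumed sig2/w_i
M$S    <- array(0, c(1, q, q))        # one penalty matrix
M$S[1, , ] <- diag(q)                 # e.g. a simple ridge-type penalty
M$off  <- c(0)                        # penalty occupies parameters 1..q
M$df   <- c(q)                        # all q rows/cols of M$S[1] are non-zero
M$fix  <- c(FALSE)                    # estimate this smoothing parameter
M$sig2 <- -1                          # non-positive: use GCV, estimate variance
M$sp   <- c(-1)                       # not strictly positive: auto initial value
# M <- mgcv(M)                        # would then estimate M$sp and return M$p
```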

Details

This is documentation for the code implementing the method described in section 4 of Wood (2000). The method is a computationally efficient means of applying GCV to the problem of smoothing parameter selection in generalized ridge regression problems of the form:

minimise || W^0.5 (Xp-y) ||^2 rho + lambda_1 p'S_1 p + lambda_2 p'S_2 p + . . .

possibly subject to constraints Cp=0. X is a design matrix, p a parameter vector, y a data vector, W a diagonal weight matrix, S_i a positive semi-definite matrix of coefficients defining the ith penalty and C a matrix of coefficients defining any linear equality constraints on the problem. The smoothing parameters are the lambda_i, but there is an overall smoothing parameter rho as well. Note that X must be of full column rank, at least when projected into the null space of any equality constraints.

The method operates by alternating very efficient direct searches for rho with Newton updates of the logs of the lambda_i.

The method is coded in C and is intended to be portable. It should be noted that seriously ill-conditioned problems (i.e. with close to column rank deficiency in the design matrix) may cause difficulties, especially if weights vary wildly between observations.
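For reference, the criteria being minimized take the following standard forms (a sketch in the plain-text notation of this page, not a statement of the exact expressions used internally; A denotes the influence, or "hat", matrix mapping y to the fitted values, n the number of data points, and tr() the trace; see Wood (2000) for the precise formulation):

GCV:  V = n || W^0.5 (y - Ay) ||^2 / [tr(I - A)]^2

UBRE: U = || W^0.5 (y - Ay) ||^2 / n - sig2 + 2 sig2 tr(A) / n

Both scores depend on the smoothing parameters through A, which is why their minimization selects the lambda_i and rho.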

Value

The function returns a list of the same form as the input list with the following elements added/modified:

M$p The best fit parameters given the estimated smoothing parameters.
M$sp The estimated smoothing parameters (lambda_i/rho).
M$sig2 The estimated (GCV) or supplied (UBRE) error variance.

WARNING

The method may not behave well with a near column rank deficient X, especially in contexts where the weights vary wildly.

Author(s)

Simon N. Wood snw@st-and.ac.uk

References

Gu and Wahba (1991) Minimizing GCV/GML scores with multiple smoothing parameters via the Newton method. SIAM J. Sci. Statist. Comput. 12:383-398

Wood (2000) Modelling and Smoothing Parameter Estimation with Multiple Quadratic Penalties. JRSSB 62(2):413-428

http://www.ruwpa.st-and.ac.uk/simon.html

See Also

GAMsetup, gam

Examples


 # This example modified from routine SANtest()
    
    n<-100 # number of observations to simulate
    x <- runif(4 * n, 0, 1) # simulate covariates
    x <- array(x, dim = c(4, n)) # put into array for passing to GAMsetup
    pi <- asin(1) * 2  # begin simulating some data
    y <- 2 * sin(pi * x[1, ])
    y <- y + exp(2 * x[2, ]) - 3.75887
    y <- y + 0.2 * x[3, ]^11 * (10 * (1 - x[3, ]))^6 + 10 * (10 *
        x[3, ])^3 * (1 - x[3, ])^10 - 1.396
    sig2 <- -1   # error variance magnitude; non-positive value signals GCV use
    e <- rnorm(n, 0, sqrt(abs(sig2)))
    y <- y + e          # simulated data
    w <- matrix(1, n, 1) # weight matrix
    par(mfrow = c(2, 2)) # scatter plots of simulated data   
    plot(x[1, ], y)
    plot(x[2, ], y)
    plot(x[3, ], y)
    plot(x[4, ], y)
    G <- list(m = 4, n = n, nsdf = 0, df = c(15, 15, 15, 15),
        x = x) # create list for passing to GAMsetup
    H <- GAMsetup(G)
    H$y <- y    # add data to H
    H$sig2 <- sig2  # add variance (signalling GCV use in this case) to H
    H$sp <- array(-1, H$m)   # not strictly positive: auto-generate initial estimates
    H$fix <- array(FALSE, H$m) # estimate all smoothing parameters
    H$w <- w       # add weights to H
    H <- mgcv(H)  # select smoothing parameters and fit model (no constant)