Family {mboost}                                        R Documentation

Gradient Boosting Families
Description

boost_family objects provide a convenient way to specify loss functions
and corresponding risk functions to be optimized by one of the boosting
algorithms implemented in this package.
Usage

Family(ngradient, loss = NULL, risk = NULL,
       offset = function(y, w)
           optimize(risk, interval = range(y), y = y, w = w)$minimum,
       check_y = function(y) y,
       weights = c("any", "none", "zeroone", "case"),
       nuisance = function() return(NA),
       name = "user-specified",
       fW = NULL,
       response = function(f) NA,
       rclass = function(f) NA)
AdaExp()
AUC()
Binomial(link = c("logit", "probit"), ...)
GaussClass()
GaussReg()
Gaussian()
Huber(d = NULL)
Laplace()
Poisson()
GammaReg(nuirange = c(0, 100))
CoxPH()
QuantReg(tau = 0.5, qoffset = 0.5)
ExpectReg(tau = 0.5)
NBinomial(nuirange = c(0, 100))
PropOdds(nuirange = c(-0.5, -1), offrange = c(-5, 5))
Weibull(nuirange = c(0, 100))
Loglog(nuirange = c(0, 100))
Lognormal(nuirange = c(0, 100))
Arguments

ngradient
    a function with arguments y, f and w implementing the negative
    gradient of the loss function (which is to be minimized).

loss
    an optional loss function with arguments y and f.

risk
    an optional risk function with arguments y, f and w to be minimized.

offset
    a function with arguments y and w (weights) for computing a scalar
    offset.

fW
    transformation of the fit for the diagonal weights matrix for an
    approximation of the boosting hat matrix for loss functions other
    than squared error.

response
    inverse link function of a GLM or any other transformation on the
    scale of the response.

rclass
    function to derive class predictions from conditional class
    probabilities (for models with factor response variable).

check_y
    a function for checking and transforming the class / mode of a
    response variable.

nuisance
    a function for extracting nuisance parameters from the family.

weights
    a character indicating if weights are allowed.

name
    a character giving the name of the loss function for pretty
    printing.

link
    link function for the binomial family. Alternatively, one may
    supply the name of a distribution (for example link = "cauchy");
    see Details.

d
    delta parameter for the Huber loss function. If omitted, it is
    chosen adaptively.

tau
    the quantile or expectile to be estimated, a number strictly
    between 0 and 1.

qoffset
    quantile of the response distribution to be used as offset.

nuirange
    a vector containing the end-points of the interval to be searched
    for the minimum risk w.r.t. the nuisance parameter. In case of
    PropOdds(), the starting values for the nuisance (threshold)
    parameters.

offrange
    interval to search in for the offset.

...
    additional arguments to link functions.
Details

The boosting algorithm implemented in mboost minimizes the (weighted)
empirical risk function risk(y, f, w) with respect to f. By default,
the risk function is the weighted sum of the loss function loss(y, f)
but can be chosen arbitrarily. The ngradient(y, f) function is the
negative gradient of loss(y, f) with respect to f.
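For illustration, a minimal sketch of this relationship for squared
error (the function names are illustrative, not the internal code):

## Sketch: with a pointwise loss(y, f), the default risk is the
## weighted sum of the losses, and ngradient is -d loss / d f.
loss      <- function(y, f) (y - f)^2
risk      <- function(y, f, w = 1) sum(w * loss(y, f))
ngradient <- function(y, f, w = 1) 2 * (y - f)  # negative gradient of loss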
Pre-fabricated functions for the most commonly used loss functions are
available as well. Buehlmann and Hothorn (2007) give a detailed
overview of the available loss functions. The offset function returns
the population minimizers evaluated at the response, i.e.,
1/2 log(p / (1 - p)) for Binomial() or AdaExp(),
(∑ w_i)^{-1} ∑ w_i y_i for Gaussian(), and the median for Huber() and
Laplace(). A short summary of the available families is given in the
following paragraphs:
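As a sketch, these defaults correspond to stand-alone functions of
roughly the following form (simplified and unweighted where noted, not
the internal code):

## Population minimizers used as offsets
off_binomial <- function(y) {            # y coded as -1 / +1
    p <- mean(y == 1)                    # unweighted success probability
    0.5 * log(p / (1 - p))
}
off_gaussian <- function(y, w) weighted.mean(y, w)
off_laplace  <- function(y) median(y)    # internally, a weighted median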
AdaExp(), Binomial() and AUC() implement families for binary
classification. AdaExp() uses the exponential loss, which essentially
leads to the AdaBoost algorithm of Freund and Schapire (1996).
Binomial() implements the negative binomial log-likelihood of a
logistic regression model as loss function. Thus, using the Binomial
family closely corresponds to fitting a logistic model. Alternative
link functions can be specified via the name of the corresponding
distribution; for example, link = "cauchy" leads to pcauchy being used
as link function. This feature is still experimental and not well
tested. Note that the coefficients resulting from boosting with family
Binomial(link = "logit") are 1/2 of the coefficients of a logit model
obtained via glm. This is due to the internal recoding of the response
to -1 and +1 (see below). Nevertheless, Buehlmann and Hothorn (2007)
argue that the Binomial family is the preferred choice for binary
classification. For binary classification problems the response y has
to be a factor. Internally, y is re-coded to -1 and +1 (Buehlmann and
Hothorn 2007).
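The factor of 1/2 is easy to verify; a minimal sketch on simulated data
(all variable names illustrative), assuming enough boosting iterations
for convergence:

library(mboost)
set.seed(29)
x <- rnorm(200)
y <- factor(rbinom(200, size = 1, prob = plogis(1 + 2 * x)))
d <- data.frame(y = y, x = x)
fm <- glm(y ~ x, data = d, family = binomial())
bm <- glmboost(y ~ x, data = d, family = Binomial(),
               control = boost_control(mstop = 1000))
coef(fm)                       # glm coefficients
2 * coef(bm, off2int = TRUE)   # approximately equal for large mstop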
AUC() uses 1 - AUC(y, f) as the loss function. The area under the ROC
curve (AUC) is defined as

AUC = (n_{-1} n_1)^{-1} ∑_{i: y_i = 1} ∑_{j: y_j = -1} I(f_i > f_j).

Since this is not differentiable in f, we approximate the jump function
I((f_i - f_j) > 0) by the distribution function of the triangular
distribution on [-1, 1] with mean 0, similar to the logistic
distribution approximation used in Ma and Huang (2005).
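A sketch of this smooth surrogate, the distribution function of the
symmetric triangular distribution on [-1, 1], which is applied to the
pairwise differences f_i - f_j:

## CDF of the triangular distribution on [-1, 1] with mean 0;
## a smooth stand-in for the jump function I(x > 0)
ptriangular <- function(x)
    ifelse(x <= -1, 0,
    ifelse(x <=  0, (x + 1)^2 / 2,
    ifelse(x <=  1, 1 - (1 - x)^2 / 2, 1)))
curve(ptriangular, from = -1.5, to = 1.5)  # smooth approximation of the step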
Gaussian() is the default family in mboost. It implements L_2 boosting
for continuous response. Note that the families GaussReg() and
GaussClass() (for regression and classification) are now deprecated.
Huber() implements a robust variant of boosting for continuous
response based on the Huber loss. Laplace() implements another strategy
for continuous outcomes and uses the L_1 loss instead of the L_2 loss
used by Gaussian().
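A small sketch contrasting the three loss functions on simulated data
with a few gross outliers (all names illustrative):

library(mboost)
set.seed(31)
x <- rnorm(100)
y <- 2 * x + rnorm(100)
y[1:5] <- y[1:5] + 10                           # contaminate with outliers
m_l2    <- glmboost(y ~ x, family = Gaussian()) # L2 loss, outlier-sensitive
m_l1    <- glmboost(y ~ x, family = Laplace())  # L1 loss, robust
m_huber <- glmboost(y ~ x, family = Huber())    # delta chosen adaptively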
Poisson() implements a family for fitting count data with boosting
methods. The implemented loss function is the negative Poisson
log-likelihood. Note that the natural link function log(μ) = η is
assumed. The default step-size nu = 0.1 is probably too large for this
family (leading to infinite residuals), and smaller values are more
appropriate.
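For example, a smaller step-size can be set via boost_control(); a
sketch on simulated count data:

library(mboost)
set.seed(37)
x <- rnorm(100)
y <- rpois(100, lambda = exp(0.5 + 0.3 * x))    # log-linear rate
mod <- glmboost(y ~ x, family = Poisson(),
                control = boost_control(nu = 0.01, mstop = 2000))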
GammaReg()
implements a family for fitting nonnegative response
variables. The implemented loss function is the negative Gamma
log-likelihood with logarithmic link function (instead of the natural
link).
CoxPH()
implements the negative partial log-likelihood for Cox
models. Hence, survival models can be boosted using this family.
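A minimal sketch, assuming the survival package for the Surv()
response:

library(mboost)
library(survival)
data("ovarian", package = "survival")
mod <- glmboost(Surv(futime, fustat) ~ age + resid.ds,
                data = ovarian, family = CoxPH())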
QuantReg() implements boosting for quantile regression, which was
introduced in Fenske et al. (2011). ExpectReg() works analogously, but
for expectiles, which were introduced to regression by Newey and
Powell (1987).
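For instance, two quantile fits give a crude 90% prediction interval; a
sketch on simulated heteroscedastic data (names illustrative):

library(mboost)
set.seed(41)
x <- runif(200)
y <- 1 + 2 * x + rnorm(200, sd = 0.5 + x)       # variance grows with x
lo <- glmboost(y ~ x, family = QuantReg(tau = 0.05))
hi <- glmboost(y ~ x, family = QuantReg(tau = 0.95))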
Families with an additional scale parameter can be used for fitting
models as well: PropOdds()
leads to proportional odds models
for ordinal outcome variables. When using this family, an ordered set of
threshold parameters is re-estimated in each boosting iteration.
NBinomial()
leads to regression models with a negative binomial
conditional distribution of the response. Weibull()
, Loglog()
,
and Lognormal()
implement the negative log-likelihood functions
of accelerated failure time models with Weibull, log-logistic, and
lognormal distributed outcomes, respectively. Hence, parametric survival
models can be boosted using these families. For details see Schmid and
Hothorn (2008) and Schmid et al. (2010).
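A sketch of a boosted accelerated failure time model, again assuming
the survival package; the nuisance() call extracts the estimated scale
parameter:

library(mboost)
library(survival)
data("ovarian", package = "survival")
mod <- glmboost(Surv(futime, fustat) ~ age,
                data = ovarian, family = Weibull())
nuisance(mod)    # estimated scale parameter of the AFT model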
Value

An object of class boost_family.
Note

The coefficients resulting from boosting with family Binomial are 1/2
of the coefficients of a logit model obtained via glm. This is due to
the internal recoding of the response to -1 and +1 (see above).

For AUC(), variables should be centered and scaled, and observations
with weight > 0 must not contain missing values. The estimated
coefficients for AUC() have no probabilistic interpretation.
Author(s)

ExpectReg() was donated by Fabian Sobotka. AUC() was donated by Fabian
Scheipl.
References

Peter Buehlmann and Torsten Hothorn (2007), Boosting algorithms:
regularization, prediction and model fitting. Statistical Science,
22(4), 477–505.

Nora Fenske, Thomas Kneib, and Torsten Hothorn (2011), Identifying risk
factors for severe childhood malnutrition by boosting additive quantile
regression. Journal of the American Statistical Association, 106,
494–510.

Yoav Freund and Robert E. Schapire (1996), Experiments with a new
boosting algorithm. In Machine Learning: Proc. Thirteenth International
Conference, 148–156.

Shuangge Ma and Jian Huang (2005), Regularized ROC method for disease
classification and biomarker selection with microarray data.
Bioinformatics, 21(24), 4356–4362.

Whitney K. Newey and James L. Powell (1987), Asymmetric least squares
estimation and testing. Econometrica, 55, 819–847.

Matthias Schmid and Torsten Hothorn (2008), Flexible boosting of
accelerated failure time models. BMC Bioinformatics, 9:269.

Matthias Schmid, Sergej Potapov, Annette Pfahlberg, and Torsten Hothorn
(2010), Estimation and regularization techniques for regression models
with multidimensional prediction functions. Statistics and Computing,
20, 139–150.
See Also

See mboost for the usage of Family objects. See boost_family-class for
objects resulting from a call to Family.
Examples

Laplace()

MyGaussian <- function() {
    Family(ngradient = function(y, f, w = 1) y - f,
           loss = function(y, f) (y - f)^2,
           name = "My Gauss Variant")
}
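The custom family can then be passed to a model fit like any built-in
family; a usage sketch with illustrative data:

set.seed(43)
x <- rnorm(100)
y <- 1 + 2 * x + rnorm(100)
mod <- glmboost(y ~ x, family = MyGaussian())
coef(mod, off2int = TRUE)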